I'm putting together a public notebook, "Getting Started with Code Generation with OpenAI & Roboflow." The notebook will focus on a generic model (my model, using my access key for the Roboflow API). Any user of the notebook will be able to step through the code, make calls, and see results from a variety of internal APIs, including Roboflow's generic photo classification, using the template code below:
from roboflow import Roboflow
rf = Roboflow(api_key="1…wN")
project = rf.workspace().project("org311-clip-photos")
model = project.version(2).model
This code example would be fine if every user of the notebook had their own Roboflow setup and their own API key, but the purpose of the notebook is introductory, so the audience will be using my photo classification model on Roboflow and, therefore, my API key.
I'm new to Jupyter notebooks but experienced with REST APIs and bearer authentication at the app level. I understand that I can make some notebook code blocks read-only where needed. But I tend to view the entire Jupyter notebook stack as "client-side," where I'm always reluctant to expose a secret credential.
For that reason, I would tend to wrap the sample Python implementation in a generic hosted API, where I know how to secure the Roboflow API key properly. The notebook code block would then attach a photo to the payload and call that generic API (a prototype REST API hosted on a cloud stack). The Roboflow API key would be a standard server-side environment variable that I know is secure.
So this REST API would just act as a proxy that manages credentials for a variety of APIs, and the code in the notebook would show the outline of attaching a photo and requesting the model's classification via a third-party hosted API.
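The notebook cell itself would then need nothing secret at all. A minimal stdlib-only sketch of that cell, assuming a hypothetical proxy URL and a base64-JSON payload (both placeholders, to be matched to however the proxy is actually built):

```python
import base64
import json
import urllib.request

# Placeholder: wherever the hosted proxy ends up living.
PROXY_URL = "https://api.example.com/classify"


def build_payload(image_bytes: bytes) -> bytes:
    # Package the photo as base64 inside a JSON body; the proxy decodes
    # it and forwards the image to Roboflow using its server-side key.
    encoded = base64.b64encode(image_bytes).decode("ascii")
    return json.dumps({"photo": encoded}).encode("utf-8")


def classify(image_path: str) -> dict:
    with open(image_path, "rb") as f:
        payload = build_payload(f.read())
    req = urllib.request.Request(
        PROXY_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    # The notebook user sees only the proxy URL; no API key appears here.
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Every cell the audience runs stays credential-free, which is exactly the property I'm after.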
Is there some easier way to do this without exposing my Roboflow API key to curious users of the notebook?
And is it good advice to view the Jupyter notebook environment and its audience as traditional "client-side," where one should instinctively take precautions before exposing credentials and secrets?