3 changes: 1 addition & 2 deletions Demos/Gemma-on-Cloudrun/README.md
@@ -267,7 +267,7 @@ Upload the contents generated for your custom model to your GCS bucket in corres
For example,
```bash
cd <your-local-ollama-model-dir>
gsutil -m cp -r . gs://YOUR_MODEL_BUCKET_NAME
gcloud storage cp --recursive . gs://YOUR_MODEL_BUCKET_NAME
```

#### 3. Deploy Cloud Run Service with GCS Volume Mount
@@ -333,4 +333,3 @@ for chunk in response:
print(chunk.text, end="")
```
Similarly for OpenAI SDK and curl examples, replace `<model>` with `<your-custom-model-name>`.
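The change above swaps `gsutil -m cp -r` for `gcloud storage cp --recursive`, and the same flag mapping recurs throughout this PR (`-r` becomes `--recursive`, `-n` becomes `--no-clobber`, while `gsutil -m`'s parallel mode has no counterpart flag because `gcloud storage` parallelizes by default). A minimal sketch of the mapping, where `gcloud_storage_cp` is an illustrative helper and not part of either CLI:

```python
import subprocess

def gcloud_storage_cp(src, dst, recursive=False, no_clobber=False, run=False):
    """Build the `gcloud storage cp` argv equivalent of a `gsutil cp` call.

    gsutil -r  -> --recursive
    gsutil -n  -> --no-clobber
    gsutil -m  -> (no flag: gcloud storage parallelizes by default)
    """
    argv = ["gcloud", "storage", "cp"]
    if recursive:
        argv.append("--recursive")
    if no_clobber:
        argv.append("--no-clobber")
    argv += [src, dst]
    if run:  # only invoke the real CLI when explicitly asked
        subprocess.run(argv, check=True)
    return argv

# Equivalent of `gsutil -m cp -r . gs://YOUR_MODEL_BUCKET_NAME`:
cmd = gcloud_storage_cp(".", "gs://YOUR_MODEL_BUCKET_NAME", recursive=True)
```

Building the argv as a list (rather than a shell string) also sidesteps quoting issues if bucket paths ever contain spaces.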

3 changes: 1 addition & 2 deletions PaliGemma/[PaliGemma_1]Finetune_with_image_description.ipynb
@@ -292,8 +292,7 @@
"TOKENIZER_PATH = \"./paligemma_tokenizer.model\"\n",
"if not os.path.exists(TOKENIZER_PATH):\n",
" print(\"Downloading the model tokenizer...\")\n",
" !gsutil cp gs://big_vision/paligemma_tokenizer.model {TOKENIZER_PATH}\n",
" print(f\"Tokenizer path: {TOKENIZER_PATH}\")"
" !gcloud storage cp gs://big_vision/paligemma_tokenizer.model {TOKENIZER_PATH}\n", " print(f\"Tokenizer path: {TOKENIZER_PATH}\")"
critical

This automated change has resulted in a malformed JSON structure for the notebook. The newline and comma separating two string elements in the source array have been removed, merging them into a single invalid line. This will prevent the notebook from being opened or parsed correctly.

        "  !gcloud storage cp gs://big_vision/paligemma_tokenizer.model {TOKENIZER_PATH}\n",
        "  print(f\"Tokenizer path: {TOKENIZER_PATH}\")"

]
},
{
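The failure mode flagged in the review comment above can be guarded against mechanically after any scripted rewrite of `.ipynb` files. As a minimal sketch using only the standard library (`check_notebook_source` is an illustrative name, not part of any tool in this repo), a parse-and-shape check:

```python
import json

def check_notebook_source(text):
    """Parse notebook JSON and verify every cell's `source` is a list of strings.

    A missing comma between two source strings (the bug flagged above) makes
    the file invalid JSON, so json.loads raises JSONDecodeError before the
    shape checks are even reached.
    """
    nb = json.loads(text)
    for i, cell in enumerate(nb.get("cells", [])):
        src = cell.get("source", [])
        if not isinstance(src, list) or not all(isinstance(s, str) for s in src):
            raise ValueError(f"cell {i}: source must be a list of strings")
    return nb
```

Running a check like this in CI, or before committing automated edits, would have caught this regression across all three notebooks at once.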
3 changes: 1 addition & 2 deletions PaliGemma/[PaliGemma_1]Finetune_with_object_detection.ipynb
@@ -591,8 +591,7 @@
"TOKENIZER_PATH = \"./paligemma_tokenizer.model\"\n",
"if not os.path.exists(TOKENIZER_PATH):\n",
" print(\"Downloading the model tokenizer...\")\n",
" !gsutil cp gs://big_vision/paligemma_tokenizer.model {TOKENIZER_PATH}\n",
" print(f\"Tokenizer path: {TOKENIZER_PATH}\")"
" !gcloud storage cp gs://big_vision/paligemma_tokenizer.model {TOKENIZER_PATH}\n", " print(f\"Tokenizer path: {TOKENIZER_PATH}\")"
]
},
{
6 changes: 2 additions & 4 deletions PaliGemma/[PaliGemma_2]Finetune_with_JAX.ipynb
@@ -424,14 +424,12 @@
"TOKENIZER_PATH = \"./paligemma_tokenizer.model\"\n",
"if not os.path.exists(TOKENIZER_PATH):\n",
" print(\"Downloading the model tokenizer...\")\n",
" !gsutil cp gs://big_vision/paligemma_tokenizer.model {TOKENIZER_PATH}\n",
" print(f\"Tokenizer path: {TOKENIZER_PATH}\")\n",
" !gcloud storage cp gs://big_vision/paligemma_tokenizer.model {TOKENIZER_PATH}\n", " print(f\"Tokenizer path: {TOKENIZER_PATH}\")\n",
critical

This automated change has broken the JSON formatting of the notebook by merging two lines into one. A newline and comma are missing between the two string elements in the source array, which will cause parsing errors.

Suggested change
" !gcloud storage cp gs://big_vision/paligemma_tokenizer.model {TOKENIZER_PATH}\n", " print(f\"Tokenizer path: {TOKENIZER_PATH}\")\n",
" !gcloud storage cp gs://big_vision/paligemma_tokenizer.model {TOKENIZER_PATH}\n",
" print(f\"Tokenizer path: {TOKENIZER_PATH}\")\n",

"\n",
"DATA_DIR=\"./longcap100\"\n",
"if not os.path.exists(DATA_DIR):\n",
" print(\"Downloading the dataset...\")\n",
" !gsutil -m -q cp -n -r gs://longcap100/ .\n",
" print(f\"Data path: {DATA_DIR}\")"
" !gcloud storage cp --no-clobber --recursive gs://longcap100/ .\n", " print(f\"Data path: {DATA_DIR}\")"
critical

Similar to the previous change in this file, the automated script has produced invalid JSON by merging two lines. This will break the notebook.

        "  !gcloud storage cp --no-clobber --recursive gs://longcap100/ .\n",
        "  print(f\"Data path: {DATA_DIR}\")"

]
},
{
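All three notebooks share the same line-merge damage. Where the merged file still parses as JSON (i.e. the separating comma survived and only the physical newline was lost), one generic repair is to round-trip it through the standard `json` module; `renormalize_notebook` is an illustrative name for that sketch:

```python
import json

def renormalize_notebook(text):
    """Round-trip a parseable .ipynb through the json module so each source
    string lands back on its own physical line (Jupyter writes indent=1)."""
    nb = json.loads(text)
    return json.dumps(nb, indent=1, ensure_ascii=False) + "\n"
```

Files where the comma itself was dropped are invalid JSON and cannot be repaired this way; those need the manual fixes suggested in the review comments above.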