developers/tutorials/request.mdx (62 additions, 32 deletions)
@@ -45,17 +45,17 @@ export RPC_URL="https://..."
 export PRIVATE_KEY="abcdef..."
 ```
 
-#### Storage Provider
+#### Storage Uploader
 
 <Tip>
 For this tutorial, we suggest using a Pinata API key which will upload your program at runtime.
 
-If you do not want to use an API key, or if you want to use a provider other than Pinata, you can pre-upload your program to a public URL (this could be hosted via Pinata or any other service).
+If you do not want to use an API key, or if you want to use a provider other than Pinata (e.g. S3 or GCS), you can pre-upload your program to a public URL (this could be hosted via Pinata or any other service).
 
-To see more information about this option, please read [No Storage Provider](/developers/tutorials/request#no-storage-provider).
+To see more information about storage options, please read [Storage Providers](/developers/tutorials/request#storage-providers).
 </Tip>
 
-To make a program, and its inputs, accessible to provers, they need to be hosted at a public URL. We recommend using IPFS for storage, particularly via [Pinata](https://pinata.cloud), as their free tier comfortably covers most Boundless use cases.
+To make a program, and its inputs, accessible to provers, they need to be hosted at a public URL. We recommend using IPFS for storage, particularly via [Pinata](https://pinata.cloud), as their free tier comfortably covers most Boundless use cases. The SDK also supports [S3](/developers/tutorials/request#s3) and [GCS](/developers/tutorials/request#google-cloud-storage-gcs).
@@ -111,50 +112,76 @@ This will store the `journal` and `seal` from the Boundless market, together the
 
 ### Storage Providers
 
-The Boundless Market SDK automatically configures the storage provider based on environment variables; it supports both IPFS and S3 for uploading programs and inputs.
+The Boundless Market SDK supports multiple storage backends for uploading programs and inputs: **IPFS (Pinata)**, **S3**, and **Google Cloud Storage (GCS)**. The SDK uses `StorageUploaderConfig` with clap, so the storage backend is configured via environment variables or CLI flags.
 
-#### IPFS
+#### IPFS (Pinata)
 
-For example, if you set the following:
+To use Pinata for IPFS uploads, set the following environment variable:
 
-_IPFS_ is set automatically to the storage provider, and your JWT will be used to upload programs/inputs via Pinata's gateway.
+The SDK picks the storage backend based on which env vars are set. When `PINATA_JWT` is set, it uses Pinata to upload programs and inputs to IPFS.
 
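As a concrete sketch of the Pinata path described above: per the changed text, the SDK selects the backend from whichever variables are set, so setting only `PINATA_JWT` should select Pinata. The JWT value below is a placeholder you would obtain from your own Pinata dashboard.

```shell
# Select the Pinata/IPFS backend by setting only the Pinata variable;
# the SDK reads it from the environment at runtime.
# Placeholder value: use the JWT from your own Pinata account.
export PINATA_JWT="eyJhbGciOi..."

# Sanity check: the variable must be exported, not merely set,
# for the SDK process to see it.
env | grep -c '^PINATA_JWT='   # prints 1
```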
 #### S3
 
-To use S3 as your storage provider, you need to set the following environment variables:
+To use S3 as your storage backend, set the following environment variables:
 
 ```bash
-export S3_ACCESS_KEY="abcdef..."
-export S3_SECRET_KEY="abcdef..."
-export S3_BUCKET="bucket-name..."
-export S3_URL="https://bucket-url..."
-export AWS_REGION="us-east-1"
+export S3_BUCKET="bucket-name"
+export S3_URL="https://s3.us-east-1.amazonaws.com" # optional, for S3-compatible services
+export AWS_ACCESS_KEY_ID="abcdef..." # optional, uses AWS default credential chain if not set
+export AWS_SECRET_ACCESS_KEY="abcdef..." # optional, uses AWS default credential chain if not set
+export AWS_REGION="us-east-1" # optional, can be inferred from environment
 ```
 
 Once these are set, this will automatically use the specified [AWS S3 bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/creating-buckets-s3.html) for storage of programs and inputs.
 
 <Warning>
-The SDK generates S3 presigned URLs that expire after 12 hours. If your request takes longer to fulfill, provers cannot download your program or inputs after expiry. For long-running requests, use IPFS storage or set `S3_NO_PRESIGNED=1` to use direct S3 URLs with appropriate bucket policies.
+By default, the SDK generates S3 presigned URLs that expire after 12 hours. If your request takes longer to fulfill, provers cannot download your program or inputs after expiry. For long-running requests, you have a few options:
+
+- Use IPFS storage instead
+- Set `S3_PUBLIC_URL=true` to return public HTTPS URLs (requires a public bucket)
+- Set `S3_PRESIGNED=false` to use direct S3 URLs with appropriate bucket policies
 </Warning>
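As a sketch of the last option in the warning, a configuration for a long-running request could look like this. The variable names come from the S3 section and the warning; the bucket name and region are placeholders, and you are responsible for a bucket policy that lets provers fetch the objects.

```shell
# S3 backend with presigned URLs disabled: provers fetch plain S3 object
# URLs, so access is controlled by your bucket policy rather than by a
# signature that expires after 12 hours.
export S3_BUCKET="bucket-name"   # placeholder
export AWS_REGION="us-east-1"    # placeholder
export S3_PRESIGNED=false

echo "presigned URLs disabled: S3_PRESIGNED=$S3_PRESIGNED"
```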
 
+#### Google Cloud Storage (GCS)
+
+<Note>
+GCS support requires the `gcs` feature flag: `cargo add boundless-market --features gcs`
+</Note>
+
+To use Google Cloud Storage, set the following environment variable:
+
+```bash
+export GCS_BUCKET="your-bucket-name"
+```
+
+**Authentication** is resolved via the [Google Cloud Application Default Credentials (ADC)](https://cloud.google.com/docs/authentication/application-default-credentials) chain:
+
+1. `GOOGLE_APPLICATION_CREDENTIALS` environment variable pointing to a service account JSON key file
+2. Well-known file locations (`~/.config/gcloud/application_default_credentials.json`, set up via `gcloud auth application-default login`)
+3. Workload Identity on GKE, metadata server on Compute Engine, etc.
+
+You can also provide credentials directly via `GCS_CREDENTIALS_JSON` when loading from a secrets manager without writing to disk.
+
+**Configuration:**
+
+| Environment Variable | Description |
+|---|---|
+| `GCS_BUCKET` | **(Required)** GCS bucket name |
+| `GCS_URL` | Custom endpoint URL (for emulators like `fake-gcs-server`) |
+| `GCS_CREDENTIALS_JSON` | Service account JSON string (bypasses ADC) |
+| `GCS_PUBLIC_URL` | Set to `true` to return public HTTPS URLs (`https://storage.googleapis.com/{bucket}/{key}`) instead of `gs://` URLs. Requires the bucket to be publicly readable. |
+
+<Tip>
+For public buckets, set `GCS_PUBLIC_URL=true` so provers can download via standard HTTPS without needing GCS credentials. After each upload, a HEAD request verifies the object is publicly accessible.
+</Tip>
+
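Putting the GCS pieces above together, a minimal developer-machine setup might look like the following. The bucket name is a placeholder; the `gcloud` login step is the one from item 2 of the ADC chain and is shown as a comment because it is interactive and one-time.

```shell
# One-time, interactive: writes
# ~/.config/gcloud/application_default_credentials.json for the ADC chain.
#   gcloud auth application-default login

# Required: the bucket the SDK uploads programs and inputs to.
export GCS_BUCKET="your-bucket-name"

# Optional: for a publicly readable bucket, hand provers HTTPS URLs
# (https://storage.googleapis.com/{bucket}/{key}) instead of gs:// URLs.
export GCS_PUBLIC_URL=true
```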
 #### No Storage Provider
 
-A perfectly valid option for `StorageProvider` is `None`; if you don't set any relevant environment variables for IPFS/S3, it won't use a storage provider to upload programs or inputs at runtime. This means you will need to upload your program ahead of time, and provide the public URL. For the inputs, you can also pass them inline (i.e. in the transaction) if they are small enough. Otherwise, you can upload inputs ahead of time as well.
+If you don't set any storage-related environment variables, no storage backend is configured. This means you will need to upload your program ahead of time, and provide the public URL. For the inputs, you can also pass them inline (i.e. in the transaction) if they are small enough. Otherwise, you can upload inputs ahead of time as well.
 
 ### Uploading Programs
 
@@ -164,7 +191,8 @@ Provers must be able to access your guest program via a publicly accessible URL;
developers/tutorials/sensitive-inputs.mdx (6 additions, 6 deletions)
@@ -199,12 +199,12 @@ With your inputs now sitting privately in S3, you may now [request a proof](/dev
 
 If you have already uploaded your inputs using the `aws` CLI above, you can skip the information below. Otherwise, if you are interested in using the Boundless SDK to upload your inputs to your S3 bucket, you will need to:
 
 - make sure your AWS credentials are set in environment variables, specifically:
-  - `S3_ACCESS` for the access key
-  - `S3_SECRET` for the secret key
+  - `AWS_ACCESS_KEY_ID` for the access key (optional if using the AWS default credential chain)
+  - `AWS_SECRET_ACCESS_KEY` for the secret key (optional if using the AWS default credential chain)
   - `S3_BUCKET` for the bucket name of the bucket created in [Create the S3 bucket](/developers/tutorials/sensitive-inputs#1-create-the-s3-bucket)
-  - `S3_URL` for the bucket URL of the bucket created in [Create the S3 bucket](/developers/tutorials/sensitive-inputs#1-create-the-s3-bucket)
-  - `AWS_REGION` for the bucket region.
-  - and, last but not least, make sure `S3_NO_PRESIGNED=1`
+  - `S3_URL` for the bucket endpoint URL of the bucket created in [Create the S3 bucket](/developers/tutorials/sensitive-inputs#1-create-the-s3-bucket)
+  - `AWS_REGION` for the bucket region
+  - and, last but not least, make sure `S3_PRESIGNED=false` to use direct S3 URLs
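The checklist above can be collected into a single environment block. All values are placeholders for the bucket you created earlier, and the two credential exports can be dropped if the AWS default credential chain is already configured on your machine.

```shell
# Credentials (optional with the AWS default credential chain).
export AWS_ACCESS_KEY_ID="abcdef..."
export AWS_SECRET_ACCESS_KEY="abcdef..."

# The gated bucket from step 1 of this tutorial (placeholder values).
export S3_BUCKET="my-sensitive-inputs"
export S3_URL="https://s3.us-east-1.amazonaws.com"
export AWS_REGION="us-east-1"

# Direct S3 URLs: access is governed by your gating policies,
# not by presigned links.
export S3_PRESIGNED=false
```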
 
 After this setup, you may request a proof programmatically as [Request a Proof](/developers/tutorials/request) recommends; your inputs will be automatically uploaded to your gated S3 bucket. However, remember that you still need to go through all the necessary gating policies as laid out in this tutorial to make sure your inputs are private and only available to select provers.
@@ -213,7 +213,7 @@ If you're interested in doing a one-off test, take a look at the [requestor modu