Commit 8f5aa17

Merge pull request #39 from boundless-xyz/docs/gcs-storage-provider
docs: add GCS storage provider, update SDK storage API references
2 parents cf816dd + 23871b3 commit 8f5aa17

File tree

3 files changed: +78 −43 lines changed


developers/tooling/sdk.mdx

Lines changed: 10 additions & 5 deletions
@@ -89,7 +89,8 @@ let (journal, receipt) = client.fetch_set_inclusion_receipt(request_id, [0u8; 32
 - `OrderStreamClient`: Submit/fetch orders offchain via WebSocket.

 ### `storage`
-- Providers: `S3` and `Pinata` for uploading program and input data.
+- Uploaders: `S3`, `GCS`, and `Pinata` for uploading program and input data.
+- Downloaders: `HTTP`, `S3`, `GCS`, and `File` for downloading programs and inputs (auto-selected based on URL scheme).

 ### `selector`
 - Utilities for tracking/verifying proof types.
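
The "auto-selected based on URL scheme" behavior named in the added downloader line can be illustrated with a small sketch. The scheme-to-downloader mapping below is inferred from the downloader names in the diff and is an assumption, not the SDK's actual dispatch code:

```shell
#!/bin/sh
# Illustrative only: map a URL scheme to the downloader families listed above.
# The real selection logic lives inside boundless-market and may differ.
select_downloader() {
  case "$1" in
    http://*|https://*) echo "HTTP" ;;
    s3://*)             echo "S3" ;;
    gs://*)             echo "GCS" ;;
    file://*)           echo "File" ;;
    *)                  echo "unknown" ;;
  esac
}

select_downloader "https://example.com/guest.bin"   # HTTP
select_downloader "gs://my-bucket/input.bin"        # GCS
```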
@@ -98,21 +99,25 @@ let (journal, receipt) = client.fetch_set_inclusion_receipt(request_id, [0u8; 32

 ```rust
 use boundless_market::{
-    Client,
+    Client, StorageUploaderConfig,
     contracts::{FulfillmentData, RequestId, Requirements, Predicate, Offer},
-    storage::storage_provider_from_env,
     request_builder::OfferParams,
 };
 use alloy::signers::local::PrivateKeySigner;
 use alloy::primitives::U256;
 use std::time::Duration;
 use url::Url;

-async fn proof_submission(signer: &PrivateKeySigner, rpc_url: Url) -> anyhow::Result<()> {
+async fn proof_submission(
+    signer: &PrivateKeySigner,
+    rpc_url: Url,
+    storage_config: &StorageUploaderConfig,
+) -> anyhow::Result<()> {
     let client = Client::builder()
         .with_rpc_url(rpc_url)
         .with_private_key(signer.clone())
-        .with_storage_provider(Some(storage_provider_from_env()?))
+        .with_uploader_config(storage_config)
+        .await?
         .build()
         .await?;

developers/tutorials/request.mdx

Lines changed: 62 additions & 32 deletions
@@ -45,17 +45,17 @@ export RPC_URL="https://..."
 export PRIVATE_KEY="abcdef..."
 ```

-#### Storage Provider
+#### Storage Uploader

 <Tip>
 For this tutorial, we suggest using a Pinata API key which will upload your program at runtime.

-If you do not want to use an API key, or if you want to use a provider other than Pinata, you can pre-upload you program to a public URL (this could be hosted via Pinata or any other service).
+If you do not want to use an API key, or if you want to use a provider other than Pinata (e.g. S3 or GCS), you can pre-upload your program to a public URL (this could be hosted via Pinata or any other service).

-To see more information about this option, please read [No Storage Provider](/developers/tutorials/request#no-storage-provider).
+To see more information about storage options, please read [Storage Providers](/developers/tutorials/request#storage-providers).
 </Tip>

-To make a program, and its inputs, accessible to provers, they need to be hosted at a public URL. We recommend using IPFS for storage, particularly via [Pinata](https://pinata.cloud), as their free tier comfortably covers most Boundless use cases.
+To make a program, and its inputs, accessible to provers, they need to be hosted at a public URL. We recommend using IPFS for storage, particularly via [Pinata](https://pinata.cloud), as their free tier comfortably covers most Boundless use cases. The SDK also supports [S3](/developers/tutorials/request#s3) and [GCS](/developers/tutorials/request#google-cloud-storage-gcs).

 Before submitting a request, you'll need to:

@@ -73,7 +73,8 @@ export PINATA_JWT="abcdef..."
 let client = Client::builder()
     .with_rpc_url(args.rpc_url)
     .with_private_key(args.private_key)
-    .with_storage_provider(Some(storage_provider_from_env()?))
+    .with_uploader_config(&args.storage_config)
+    .await?
     .build()
     .await?;
 ```
@@ -111,50 +112,76 @@ This will store the `journal` and `seal` from the Boundless market, together the

 ### Storage Providers

-The Boundless Market SDK automatically configures the storage provider based on environment variables; it supports both IPFS and S3 for uploading programs and inputs.
+The Boundless Market SDK supports multiple storage backends for uploading programs and inputs: **IPFS (Pinata)**, **S3**, and **Google Cloud Storage (GCS)**. The SDK uses `StorageUploaderConfig` with clap, so the storage backend is configured via environment variables or CLI flags.

-#### IPFS
+#### IPFS (Pinata)

-For example, if you set the following:
+To use Pinata for IPFS uploads, set the following environment variable:

 ```bash
-export PINATA_JWT="abcdef"...
-```
-
-then when you use `.with_storage_provider()`:
-
-```rust
-let client = Client::builder()
-    .with_rpc_url(args.rpc_url)
-    .with_private_key(args.private_key)
-    .with_storage_provider(Some(storage_provider_from_env()?)) // [!code hl] // [!code focus]
-    .build()
-    .await?;
+export PINATA_JWT="abcdef..."
 ```

-_IPFS_ is set automatically to the storage provider, and your JWT will be used to upload programs/inputs via Pinata's gateway.
+The SDK picks the storage backend based on which env vars are set. When `PINATA_JWT` is set, it uses Pinata to upload programs and inputs to IPFS.

 #### S3

-To use S3 as your storage provider, you need to set the following environment variables:
+To use S3 as your storage backend, set the following environment variables:

 ```bash
-export S3_ACCESS_KEY="abcdef..."
-export S3_SECRET_KEY="abcdef..."
-export S3_BUCKET="bucket-name..."
-export S3_URL="https://bucket-url..."
-export AWS_REGION="us-east-1"
+export S3_BUCKET="bucket-name"
+export S3_URL="https://s3.us-east-1.amazonaws.com" # optional, for S3-compatible services
+export AWS_ACCESS_KEY_ID="abcdef..." # optional, uses AWS default credential chain if not set
+export AWS_SECRET_ACCESS_KEY="abcdef..." # optional, uses AWS default credential chain if not set
+export AWS_REGION="us-east-1" # optional, can be inferred from environment
 ```

 Once these are set, this will automatically use the specified [AWS S3 bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/creating-buckets-s3.html) for storage of programs and inputs.

 <Warning>
-The SDK generates S3 presigned URLs that expire after 12 hours. If your request takes longer to fulfill, provers cannot download your program or inputs after expiry. For long-running requests, use IPFS storage or set `S3_NO_PRESIGNED=1` to use direct S3 URLs with appropriate bucket policies.
+By default, the SDK generates S3 presigned URLs that expire after 12 hours. If your request takes longer to fulfill, provers cannot download your program or inputs after expiry. For long-running requests, you have a few options:
+
+- Use IPFS storage instead
+- Set `S3_PUBLIC_URL=true` to return public HTTPS URLs (requires a public bucket)
+- Set `S3_PRESIGNED=false` to use direct S3 URLs with appropriate bucket policies
 </Warning>

+#### Google Cloud Storage (GCS)
+
+<Note>
+GCS support requires the `gcs` feature flag: `cargo add boundless-market --features gcs`
+</Note>
+
+To use Google Cloud Storage, set the following environment variables:
+
+```bash
+export GCS_BUCKET="your-bucket-name"
+```
+
+**Authentication** is resolved via the [Google Cloud Application Default Credentials (ADC)](https://cloud.google.com/docs/authentication/application-default-credentials) chain:
+
+1. `GOOGLE_APPLICATION_CREDENTIALS` environment variable pointing to a service account JSON key file
+2. Well-known file locations (`~/.config/gcloud/application_default_credentials.json`, set up via `gcloud auth application-default login`)
+3. Workload Identity on GKE, metadata server on Compute Engine, etc.
+
+You can also provide credentials directly via `GCS_CREDENTIALS_JSON` when loading from a secrets manager without writing to disk.
+
+**Configuration:**
+
+| Environment Variable | Description |
+|---|---|
+| `GCS_BUCKET` | **(Required)** GCS bucket name |
+| `GCS_URL` | Custom endpoint URL (for emulators like `fake-gcs-server`) |
+| `GCS_CREDENTIALS_JSON` | Service account JSON string (bypasses ADC) |
+| `GCS_PUBLIC_URL` | Set to `true` to return public HTTPS URLs (`https://storage.googleapis.com/{bucket}/{key}`) instead of `gs://` URLs. Requires the bucket to be publicly readable. |
+
+<Tip>
+For public buckets, set `GCS_PUBLIC_URL=true` so provers can download via standard HTTPS without needing GCS credentials. After each upload, a HEAD request verifies the object is publicly accessible.
+</Tip>
+
 #### No Storage Provider

-A perfectly valid option for `StorageProvider` is `None`; if you don't set any relevant environment variables for IPFS/S3, it won't use a storage provider to upload programs or inputs at runtime. This means you will need to upload your program ahead of time, and provide the public URL. For the inputs, you can also pass them inline (i.e. in the transaction) if they are small enough. Otherwise, you can upload inputs ahead of time as well.
+If you don't set any storage-related environment variables, no storage backend is configured. This means you will need to upload your program ahead of time, and provide the public URL. For the inputs, you can also pass them inline (i.e. in the transaction) if they are small enough. Otherwise, you can upload inputs ahead of time as well.

 ### Uploading Programs
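
The statement in this hunk that "the SDK picks the storage backend based on which env vars are set" can be sketched as follows. The precedence order shown (Pinata, then S3, then GCS) is an assumption for illustration, not documented SDK behavior:

```shell
#!/bin/sh
# Illustrative sketch only: which uploader each documented env var implies.
# Precedence among multiple set variables is an assumption.
select_uploader() {
  if [ -n "${PINATA_JWT:-}" ]; then
    echo "Pinata"   # IPFS uploads via Pinata
  elif [ -n "${S3_BUCKET:-}" ]; then
    echo "S3"
  elif [ -n "${GCS_BUCKET:-}" ]; then
    echo "GCS"      # requires the `gcs` feature flag
  else
    echo "none"     # pre-upload your program and pass a public URL instead
  fi
}

unset PINATA_JWT S3_BUCKET
GCS_BUCKET="your-bucket-name"
select_uploader   # GCS (when PINATA_JWT and S3_BUCKET are unset)
```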

@@ -164,7 +191,8 @@ Provers must be able to access your guest program via a publicly accessible URL;

 ```rust
 let client = Client::builder()
-    .with_storage_provider(Some(storage_provider_from_env()?))
+    .with_uploader_config(&args.storage_config)
+    .await?
     .build()
     .await?;
 let program_url = client.upload_program(program).await?;
@@ -380,7 +408,8 @@ Use `config_offer_layer` when you want to adjust how the SDK calculates auction
 let client = Client::builder()
     .with_rpc_url(args.rpc_url)
     .with_private_key(args.private_key)
-    .with_storage_provider(Some(storage_provider_from_env()?))
+    .with_uploader_config(&args.storage_config)
+    .await?
     .config_offer_layer(|config| config
         // Set the price per cycle for automatic pricing calculations
         .max_price_per_cycle(parse_units("0.1", "gwei").unwrap())
@@ -417,7 +446,8 @@ The funding mode can be configured when building the client using `with_funding_
 let client = Client::builder()
     .with_rpc_url(args.rpc_url)
     .with_private_key(args.private_key)
-    .with_storage_provider(Some(storage_provider_from_env()?))
+    .with_uploader_config(&args.storage_config)
+    .await?
     .with_funding_mode(FundingMode::Always) // [!code hl] // [!code focus]
     .build()
     .await?;

developers/tutorials/sensitive-inputs.mdx

Lines changed: 6 additions & 6 deletions
@@ -199,12 +199,12 @@ With your inputs now sitting privately in S3, you may now [request a proof](/dev
 If you have already uploaded your inputs using the `aws` CLI above, you can skip the information below. Otherwise, if you are interested in using the Boundless SDK to upload your inputs to your S3 bucket, you will need to:

 - make sure your AWS credentials are set in environment variables, specifically:
-  - `S3_ACCESS` for the access key
-  - `S3_SECRET` for the secret key
+  - `AWS_ACCESS_KEY_ID` for the access key (optional if using the AWS default credential chain)
+  - `AWS_SECRET_ACCESS_KEY` for the secret key (optional if using the AWS default credential chain)
   - `S3_BUCKET` for the bucket name of the bucket created in [Create the S3 bucket](/developers/tutorials/sensitive-inputs#1-create-the-s3-bucket)
-  - `S3_URL` for the bucket URL of the bucket created in [Create the S3 bucket](/developers/tutorials/sensitive-inputs#1-create-the-s3-bucket)
-  - `AWS_REGION` for the bucket region.
-- and last, but not least, make sure `S3_NO_PRESIGNED=1`
+  - `S3_URL` for the bucket endpoint URL of the bucket created in [Create the S3 bucket](/developers/tutorials/sensitive-inputs#1-create-the-s3-bucket)
+  - `AWS_REGION` for the bucket region
+- and last, but not least, make sure `S3_PRESIGNED=false` to use direct S3 URLs

 After this setup, you may request a proof programmatically as [Request a Proof](/developers/tutorials/request) recommends; your inputs will be automatically uploaded to your gated S3 bucket, however remember that you still need to go through all the necessary gating policies as laid out in this tutorial to make sure your inputs are private and only available to select provers.

@@ -213,7 +213,7 @@ If you're interested in doing a one-off test, take a look at the [requestor modu

 <Check>
 Relevant Links:
-[StorageProvider](https://docs.rs/boundless-market/latest/boundless_market/storage/trait.StorageProvider.html), [storage_provider_from_env](https://docs.rs/boundless-market/latest/boundless_market/storage/fn.storage_provider_from_env.html).
+[StorageUploader](https://docs.rs/boundless-market/latest/boundless_market/storage/trait.StorageUploader.html), [StorageUploaderConfig](https://docs.rs/boundless-market/latest/boundless_market/storage/struct.StorageUploaderConfig.html).
 </Check>

 ## Prover
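
Collected in one place, the environment that the updated sensitive-inputs bullets call for might look like the sketch below. All values are illustrative placeholders; the AWS credential variables can be omitted when the default credential chain is used:

```shell
# Illustrative values only; see the bullet list in the diff above for each variable's role.
export AWS_ACCESS_KEY_ID="abcdef..."       # optional with the AWS default credential chain
export AWS_SECRET_ACCESS_KEY="abcdef..."   # optional with the AWS default credential chain
export S3_BUCKET="my-sensitive-inputs"     # bucket created earlier in the tutorial
export S3_URL="https://s3.us-east-1.amazonaws.com"
export AWS_REGION="us-east-1"
export S3_PRESIGNED=false                  # use direct S3 URLs instead of presigned ones
```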
