Implemented Cascade storage method in upload functionality #476

Open · wants to merge 4 commits into base: `main`
6 changes: 3 additions & 3 deletions .autodoc/docs/markdown/src/config/data.md
Original file line number Diff line number Diff line change
@@ -1,8 +1,8 @@
[View code on GitHub](https://github.com/metaplex-foundation/sugar/src/config/data.rs)

The `sugar` code defines the configuration and data structures for a project that deals with non-fungible tokens (NFTs) and programmable non-fungible tokens (pNFTs). The main structure, `ConfigData`, contains various fields related to the token standard, asset properties, creator information, and storage configurations for different platforms like AWS, NFT.Storage, Shadow Drive, and Pinata.
The `sugar` code defines the configuration and data structures for a project that deals with non-fungible tokens (NFTs) and programmable non-fungible tokens (pNFTs). The main structure, `ConfigData`, contains various fields related to the token standard, asset properties, creator information, and storage configurations for different platforms like AWS, NFT.Storage, Shadow Drive, Pinata, Cascade and Sense.

The `SugarConfig` struct holds the keypair and RPC URL for the Solana network, while `SolanaConfig` contains the JSON RPC URL, keypair path, and commitment level. The `AwsConfig` and `PinataConfig` structs store the respective platform-specific configurations.
The `SugarConfig` struct holds the keypair and RPC URL for the Solana network, while `SolanaConfig` contains the JSON RPC URL, keypair path, and commitment level. The `AwsConfig` and `PinataConfig` structs store the respective platform-specific configurations. The `cascade_api_key` and `sense_api_key` fields store the API keys used to access Pastel's Cascade and Sense protocols.
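The relationship between these optional key fields and the selected upload method can be sketched as follows. This is a hedged illustration: `ConfigKeys` and `key_for_method` are hypothetical stand-ins, not Sugar's actual types; only the field names mirror the struct in `src/config/data.rs`.

```rust
// Hypothetical sketch: each Pastel method requires its API key to be
// present in the config before an uploader can be built.
#[derive(Default)]
struct ConfigKeys {
    cascade_api_key: Option<String>,
    sense_api_key: Option<String>,
}

// Returns the key for the chosen method, or an error when it is missing.
fn key_for_method<'a>(cfg: &'a ConfigKeys, method: &str) -> Result<&'a str, String> {
    let key = match method {
        "cascade" => cfg.cascade_api_key.as_deref(),
        "sense" => cfg.sense_api_key.as_deref(),
        other => return Err(format!("no API key handled for '{other}'")),
    };
    key.ok_or_else(|| format!("missing API key for '{method}'"))
}

fn main() {
    let cfg = ConfigKeys { cascade_api_key: Some("abc".into()), ..Default::default() };
    assert_eq!(key_for_method(&cfg, "cascade"), Ok("abc"));
    assert!(key_for_method(&cfg, "sense").is_err());
    println!("ok");
}
```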

The `Creator` struct represents a creator with an address and share percentage. The `Cluster` enum represents different Solana network clusters (Devnet, Mainnet, Localnet, and Unknown). The `TokenStandard` enum distinguishes between NFT and pNFT standards.

@@ -16,7 +16,7 @@ These structures and utility functions can be used throughout the project to man

2. **What are the different `UploadMethod` options available and how do they affect the behavior of the code?**

The `UploadMethod` enum has five variants: `Bundlr`, `AWS`, `NftStorage`, `SHDW`, and `Pinata`. These options represent different storage services or methods for uploading assets. The choice of `UploadMethod` will determine which storage service or method is used when uploading assets in the project.
The `UploadMethod` enum has seven variants: `Bundlr`, `AWS`, `NftStorage`, `SHDW`, `Pinata`, `Cascade`, and `Sense`. These options represent different storage services or methods for uploading assets. The choice of `UploadMethod` determines which storage service or method is used when uploading assets in the project.

3. **How does the `TokenStandard` enum work and what are its possible values?**

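The variant set described above can be sketched as a plain enum. This is a hedged sketch: the real definition in `src/config/data.rs` carries serde attributes (and an additional `Sdrive` variant), and the config-name strings other than `"cascade"` and `"sense"` are assumptions here.

```rust
// Sketch of the UploadMethod variants discussed above; serde attributes
// from the real enum are omitted for brevity.
#[derive(Debug, Clone, Copy, PartialEq)]
enum UploadMethod {
    Bundlr,
    Aws,
    NftStorage,
    Shdw,
    Pinata,
    Cascade,
    Sense,
}

// Maps each variant to the string assumed to appear in the config file.
fn config_name(m: UploadMethod) -> &'static str {
    match m {
        UploadMethod::Bundlr => "bundlr",
        UploadMethod::Aws => "aws",
        UploadMethod::NftStorage => "nft_storage",
        UploadMethod::Shdw => "shdw",
        UploadMethod::Pinata => "pinata",
        UploadMethod::Cascade => "cascade",
        UploadMethod::Sense => "sense",
    }
}

fn main() {
    assert_eq!(config_name(UploadMethod::Cascade), "cascade");
    assert_eq!(config_name(UploadMethod::Sense), "sense");
    println!("ok");
}
```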
48 changes: 48 additions & 0 deletions .autodoc/docs/markdown/src/upload/methods/cascade.md
@@ -0,0 +1,48 @@
[View code on GitHub](https://github.com/metaplex-foundation/sugar/src/upload/methods/cascade.rs)

The code in this file is responsible for uploading files to Pastel's Cascade service. It defines the `CascadeStorageMethod` struct and implements the `Prepare` and `Uploader` traits for it. The main purpose of this code is to handle the process of uploading files to the Cascade service while adhering to its limitations, such as file size and request rate limits.

The `CascadeStorageMethod` struct contains an `Arc<Client>` for making HTTP requests. The `new` method initializes the struct by creating an HTTP client with the necessary headers, including the authentication token.

The `prepare` method, which is part of the `Prepare` trait implementation, checks if any file in the provided asset pairs exceeds the 100MB file size limit. If any file is too large, an error is returned.

The `upload` method, which is part of the `Uploader` trait implementation, is responsible for uploading the files to the Cascade protocol. It first groups the files into batches, ensuring that each batch does not exceed the file size and count limits. Then, it iterates through the batches and uploads them using a multipart HTTP request. If an upload succeeds, the cache is updated with the new file URLs and the active registration ID from Cascade, and the progress bar is incremented. If an error occurs during the upload, it is added to a list of errors that is returned at the end of the method.

To avoid hitting the rate limit, the code waits for a specified duration (`REQUEST_WAIT`) between uploading batches. Additionally, an `interrupted` flag is used to stop the upload process if needed.
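The batching, rate-limit wait, and interrupt check described above can be sketched as a self-contained loop. This is a hedged illustration: the constants, the `group_into_batches` helper, and the loop body are stand-ins for Sugar's actual code, and the wait is shortened here so the sketch runs quickly.

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use std::time::Duration;

const FILE_SIZE_LIMIT: u64 = 100 * 1024 * 1024; // 100 MB per batch (illustrative)
const FILE_COUNT_LIMIT: usize = 50;             // files per batch (illustrative)
const REQUEST_WAIT: u64 = 10;                   // ms between batches (shortened here)

// Groups file indices into batches that respect the size and count limits.
fn group_into_batches(sizes: &[u64]) -> Vec<Vec<usize>> {
    let mut batches = Vec::new();
    let (mut current, mut bytes) = (Vec::new(), 0u64);
    for (i, &size) in sizes.iter().enumerate() {
        if !current.is_empty()
            && (bytes + size > FILE_SIZE_LIMIT || current.len() == FILE_COUNT_LIMIT)
        {
            batches.push(std::mem::take(&mut current));
            bytes = 0;
        }
        current.push(i);
        bytes += size;
    }
    if !current.is_empty() {
        batches.push(current);
    }
    batches
}

fn main() {
    let interrupted = Arc::new(AtomicBool::new(false));
    // Two 60 MB files cannot share a 100 MB batch; the small file fits with the second.
    let sizes = [60 * 1024 * 1024, 60 * 1024 * 1024, 10];
    let batches = group_into_batches(&sizes);
    assert_eq!(batches, vec![vec![0], vec![1, 2]]);

    for _batch in &batches {
        // Stop early if the user interrupted the upload.
        if interrupted.load(Ordering::SeqCst) {
            break;
        }
        // ... upload the batch here, then wait to respect the rate limit ...
        std::thread::sleep(Duration::from_millis(REQUEST_WAIT));
    }
    println!("uploaded {} batches", batches.len());
}
```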

Here's an example of how this code might be used in the larger project:

```rust
let config_data = ConfigData::load("config.toml")?;
let cascade_storage_method = CascadeStorageMethod::new(&config_data).await?;

let sugar_config = SugarConfig::load("sugar_config.toml")?;
let asset_pairs = load_asset_pairs(&sugar_config)?;
let asset_indices = get_asset_indices(&asset_pairs)?;

cascade_storage_method.prepare(&sugar_config, &asset_pairs, asset_indices).await?;

let mut cache = Cache::load("cache.toml")?;
let mut assets = prepare_assets(&asset_pairs, &cache)?;
let progress = ProgressBar::new(assets.len() as u64);
let interrupted = Arc::new(AtomicBool::new(false));

let errors = cascade_storage_method
.upload(&sugar_config, &mut cache, DataType::Image, &mut assets, &progress, interrupted)
.await?;
```

This example demonstrates how to initialize the `CascadeStorageMethod`, prepare the assets for upload, and then upload them using the `upload` method.
## Questions:
1. **Question**: What is the purpose of the `CascadeStorageMethod` struct and its associated methods?
**Answer**: The `CascadeStorageMethod` struct is used to handle the interaction with the Cascade Protocol API. It provides methods for initializing a new instance with the necessary authentication, preparing the assets for upload by checking file size limits, and uploading the assets to the Cascade Protocol API.

2. **Question**: What are the constants defined at the beginning of the code and what are their purposes?
**Answer**: The constants defined at the beginning of the code are:
- `CASCADE_STORAGE_API_URL`: The base URL for the Cascade Protocol API.
- `REQUEST_WAIT`: The time window (in milliseconds) to wait between requests to avoid rate limits.
- `FILE_SIZE_LIMIT`: The maximum file size allowed for upload (100 MB).
- `FILE_COUNT_LIMIT`: The maximum number of files allowed per request.

3. **Question**: How does the `upload` method handle uploading assets in batches?
**Answer**: The `upload` method first groups the assets into batches based on the file size and count limits. It then iterates through each batch, creating a multipart form with the assets, and sends a POST request to the Cascade Protocol API. After each successful upload, the cache is updated, and the progress bar is incremented. If there are more batches to process, the method waits for a specified duration to avoid rate limits before proceeding with the next batch.
6 changes: 5 additions & 1 deletion .autodoc/docs/markdown/src/upload/methods/mod.md
@@ -41,11 +41,15 @@ This code is part of a larger project and serves as a module that provides vario
shdw::sync_data("source-storage", "destination-storage");
```

6. **cascade** This sub-module provides integration with the Cascade protocol. Cascade is a protocol that allows users to store data permanently in a highly redundant, distributed fashion for a single upfront fee. It contains functions to upload and manage IPFS content, and it adds a `cascade_id` to the metadata so that the TxID of the Action Registration ticket can be retrieved.

7. **sense** This sub-module provides integration with the Sense protocol. Sense is a lightweight protocol on the Pastel Network, built to assess the relative rareness of a given NFT against near-duplicate metadata. It contains functions to upload and manage IPFS content, and it adds a `sense_id` to the metadata so that the TxID of the Action Registration ticket can be retrieved.

By using `pub use` statements, the code re-exports the contents of each sub-module, making their functions and types available to other parts of the project without the need to explicitly import each sub-module.
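The `pub use` re-export pattern described above can be illustrated with a minimal, self-contained example. The module and function names here are placeholders, not Sugar's real ones.

```rust
// Minimal illustration of re-exporting a sub-module's items with `pub use`.
mod methods {
    pub mod cascade {
        pub fn method_name() -> &'static str { "cascade" }
    }
    pub mod sense {
        pub fn method_name() -> &'static str { "sense" }
    }
    // Re-export cascade's items at the `methods` root, so callers need not
    // name the sub-module explicitly.
    pub use self::cascade::*;
}

fn main() {
    // The re-export makes `method_name` reachable directly under `methods`.
    assert_eq!(methods::method_name(), "cascade");
    // The sub-modules themselves remain accessible too.
    assert_eq!(methods::sense::method_name(), "sense");
    println!("ok");
}
```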
## Questions:
1. **What is the purpose of each module in this code?**

Each module (aws, bundlr, nft_storage, pinata, and shdw) likely represents a different component or service within the Sugar project, but it's not clear from this code snippet alone what each module does specifically.
Each module (aws, bundlr, nft_storage, pinata, shdw, cascade, and sense) likely represents a different component or service within the Sugar project, but it's not clear from this code snippet alone what each module does specifically.

2. **How are these modules being used in the rest of the project?**

48 changes: 48 additions & 0 deletions .autodoc/docs/markdown/src/upload/methods/sense.md
@@ -0,0 +1,48 @@
[View code on GitHub](https://github.com/metaplex-foundation/sugar/src/upload/methods/sense.rs)

The code in this file is responsible for uploading files to Pastel's Sense service. It defines the `SenseStorageMethod` struct and implements the `Prepare` and `Uploader` traits for it. The main purpose of this code is to handle the process of uploading files to the Sense service while adhering to its limitations, such as file size and request rate limits.

The `SenseStorageMethod` struct contains an `Arc<Client>` for making HTTP requests. The `new` method initializes the struct by creating an HTTP client with the necessary headers, including the authentication token.

The `prepare` method, which is part of the `Prepare` trait implementation, checks if any file in the provided asset pairs exceeds the 100MB file size limit. If any file is too large, an error is returned.

The `upload` method, which is part of the `Uploader` trait implementation, is responsible for uploading the files to the Sense protocol. It first groups the files into batches, ensuring that each batch does not exceed the file size and count limits. Then, it iterates through the batches and uploads them using a multipart HTTP request. If an upload succeeds, the cache is updated with the new file URLs and the active registration ID from Sense, and the progress bar is incremented. If an error occurs during the upload, it is added to a list of errors that is returned at the end of the method.

To avoid hitting the rate limit, the code waits for a specified duration (`REQUEST_WAIT`) between uploading batches. Additionally, an `interrupted` flag is used to stop the upload process if needed.

Here's an example of how this code might be used in the larger project:

```rust
let config_data = ConfigData::load("config.toml")?;
let sense_storage_method = SenseStorageMethod::new(&config_data).await?;

let sugar_config = SugarConfig::load("sugar_config.toml")?;
let asset_pairs = load_asset_pairs(&sugar_config)?;
let asset_indices = get_asset_indices(&asset_pairs)?;

sense_storage_method.prepare(&sugar_config, &asset_pairs, asset_indices).await?;

let mut cache = Cache::load("cache.toml")?;
let mut assets = prepare_assets(&asset_pairs, &cache)?;
let progress = ProgressBar::new(assets.len() as u64);
let interrupted = Arc::new(AtomicBool::new(false));

let errors = sense_storage_method
.upload(&sugar_config, &mut cache, DataType::Image, &mut assets, &progress, interrupted)
.await?;
```

This example demonstrates how to initialize the `SenseStorageMethod`, prepare the assets for upload, and then upload them using the `upload` method.
## Questions:
1. **Question**: What is the purpose of the `SenseStorageMethod` struct and its associated methods?
**Answer**: The `SenseStorageMethod` struct is used to handle the interaction with the Sense Protocol API. It provides methods for initializing a new instance with the necessary authentication, preparing the assets for upload by checking file size limits, and uploading the assets to the Sense Protocol API.

2. **Question**: What are the constants defined at the beginning of the code and what are their purposes?
**Answer**: The constants defined at the beginning of the code are:
- `SENSE_STORAGE_API_URL`: The base URL for the Sense Protocol API.
- `REQUEST_WAIT`: The time window (in milliseconds) to wait between requests to avoid rate limits.
- `FILE_SIZE_LIMIT`: The maximum file size allowed for upload (100 MB).
- `FILE_COUNT_LIMIT`: The maximum number of files allowed per request.

3. **Question**: How does the `upload` method handle uploading assets in batches?
**Answer**: The `upload` method first groups the assets into batches based on the file size and count limits. It then iterates through each batch, creating a multipart form with the assets, and sends a POST request to the Sense Protocol API. After each successful upload, the cache is updated, and the progress bar is incremented. If there are more batches to process, the method waits for a specified duration to avoid rate limits before proceeding with the next batch.
2 changes: 1 addition & 1 deletion .autodoc/docs/markdown/src/upload/methods/summary.md
@@ -22,7 +22,7 @@ let (asset_id, uploaded_url) = upload_handle.await??;

Similarly, the `bundlr.rs` file provides a module for uploading assets to the Bundlr platform using the Solana blockchain. The `BundlrMethod` struct handles the upload process, including setting up the Bundlr client, funding the Bundlr address, and uploading the assets. An example usage is provided in the file summary.

The `nft_storage.rs` file handles uploading files to the NFT Storage service, while the `pinata.rs` file provides functionality for uploading files to the Pinata IPFS service. Both files define structs that implement the `Prepare` and `Uploader` or `ParallelUploader` traits, respectively. Example usages for these modules can be found in their respective file summaries.
The `nft_storage.rs` file handles uploading files to the NFT Storage service, while the `pinata.rs` file provides functionality for uploading files to the Pinata IPFS service. The new `cascade.rs` and `sense.rs` files upload files through Pastel's Cascade and Sense protocols and record the resulting `cascade_id` and `sense_id` (the result IDs used to fetch the active registration TxID) in the metadata. These files define structs that implement the `Prepare` and `Uploader` or `ParallelUploader` traits, respectively. Example usages for these modules can be found in their respective file summaries.
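Recording the registration IDs in the metadata, as described above, can be sketched as follows. This is a hedged sketch: `Metadata` and `attach_registration_ids` are hypothetical stand-ins, not Sugar's real metadata type or helper.

```rust
// Hypothetical metadata record holding the optional Pastel registration IDs.
#[derive(Debug, Default, PartialEq)]
struct Metadata {
    name: String,
    cascade_id: Option<String>,
    sense_id: Option<String>,
}

// Attaches whichever IDs the upload produced, leaving absent ones untouched.
fn attach_registration_ids(
    meta: &mut Metadata,
    cascade_id: Option<String>,
    sense_id: Option<String>,
) {
    if cascade_id.is_some() {
        meta.cascade_id = cascade_id;
    }
    if sense_id.is_some() {
        meta.sense_id = sense_id;
    }
}

fn main() {
    let mut meta = Metadata { name: "asset #0".into(), ..Default::default() };
    attach_registration_ids(&mut meta, Some("txid-123".into()), None);
    assert_eq!(meta.cascade_id.as_deref(), Some("txid-123"));
    assert_eq!(meta.sense_id, None);
    println!("ok");
}
```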

Finally, the `shdw.rs` file is responsible for handling the storage and uploading of assets to the Shadow Drive, a decentralized storage solution. It provides a `SHDWMethod` struct that implements the `Prepare` and `ParallelUploader` traits. An example usage is provided in the file summary.

2 changes: 1 addition & 1 deletion .autodoc/docs/markdown/src/upload/summary.md
@@ -4,7 +4,7 @@ The code in the `upload` folder is responsible for managing and uploading assets

For example, the `assets.rs` file provides functions to manage assets, calculate their sizes, and update their metadata. The `errors.rs` file defines a custom error type called `UploadError` for handling various errors that may occur during the upload process. The `process.rs` file is responsible for uploading assets to a storage system, while the `uploader.rs` file handles the uploading of assets and defines traits and structs for managing the upload process.

The `methods` subfolder contains code for handling the upload of assets to different storage services and platforms, such as Amazon S3, Bundlr, NFT Storage, Pinata IPFS, and Shadow Drive. Each storage method is implemented in a separate file, providing a clean and modular approach to integrating various storage services into the larger project.
The `methods` subfolder contains code for handling the upload of assets to different storage services and platforms, such as Amazon S3, Bundlr, NFT Storage, Pinata IPFS, Shadow Drive, Cascade, and Sense. Each storage method is implemented in a separate file, providing a clean and modular approach to integrating various storage services into the larger project.

Here's an example of how the code in the `upload` folder might be used in the larger project:

2 changes: 1 addition & 1 deletion .autodoc/docs/markdown/src/upload/uploader.md
@@ -37,4 +37,4 @@ This code would initialize an uploader object based on the configuration, prepar
**Answer:** The `ParallelUploader` trait is designed for upload methods that support parallel uploads. It abstracts the threading logic, allowing methods to focus on the logic of uploading a single asset. It inherits from the `Uploader` trait and requires implementing the `upload_asset` function, which returns a `JoinHandle` for the task responsible for uploading the specified asset.

3. **Question:** How does the `initialize` function work and what is its purpose?
**Answer:** The `initialize` function acts as a factory function for creating uploader objects based on the configuration's `uploadMethod`. It takes `sugar_config` and `config_data` as arguments and returns a `Result` containing a boxed `Uploader` trait object. Depending on the `uploadMethod`, it initializes the appropriate uploader object (e.g., `AWSMethod`, `BundlrMethod`, `NftStorageMethod`, `SHDWMethod`, or `PinataMethod`).
**Answer:** The `initialize` function acts as a factory function for creating uploader objects based on the configuration's `uploadMethod`. It takes `sugar_config` and `config_data` as arguments and returns a `Result` containing a boxed `Uploader` trait object. Depending on the `uploadMethod`, it initializes the appropriate uploader object (e.g., `AWSMethod`, `BundlrMethod`, `NftStorageMethod`, `SHDWMethod`, `PinataMethod`, `CascadeStorageMethod`, or `SenseStorageMethod`).
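The factory dispatch described above can be sketched with a boxed trait object. This is a hedged, simplified sketch: the trait, structs, and `initialize` signature here are stand-ins for Sugar's actual `Uploader` machinery, which is async and takes the config as input.

```rust
// Simplified stand-in for the Uploader trait.
trait Uploader {
    fn name(&self) -> &'static str;
}

struct CascadeStorageMethod;
struct SenseStorageMethod;

impl Uploader for CascadeStorageMethod {
    fn name(&self) -> &'static str { "cascade" }
}
impl Uploader for SenseStorageMethod {
    fn name(&self) -> &'static str { "sense" }
}

enum UploadMethod { Cascade, Sense }

// Factory: pick the concrete uploader behind a boxed trait object.
fn initialize(method: UploadMethod) -> Box<dyn Uploader> {
    match method {
        UploadMethod::Cascade => Box::new(CascadeStorageMethod),
        UploadMethod::Sense => Box::new(SenseStorageMethod),
    }
}

fn main() {
    assert_eq!(initialize(UploadMethod::Cascade).name(), "cascade");
    assert_eq!(initialize(UploadMethod::Sense).name(), "sense");
    println!("ok");
}
```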
6 changes: 6 additions & 0 deletions src/cache.rs
@@ -128,6 +128,12 @@ pub struct CacheItem {
pub animation_hash: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
pub animation_link: Option<String>,

#[serde(skip_serializing_if = "Option::is_none")]
pub cascade_id: Option<String>,

#[serde(skip_serializing_if = "Option::is_none")]
pub sense_id: Option<String>,
}

impl CacheItem {
12 changes: 12 additions & 0 deletions src/config/data.rs
@@ -76,6 +76,14 @@ pub struct ConfigData {
// Pinata specific configuration
pub pinata_config: Option<PinataConfig>,

// Cascade specific configuration
#[serde(serialize_with = "to_option_string")]
pub cascade_api_key: Option<String>,

// Sense specific configuration
#[serde(serialize_with = "to_option_string")]
pub sense_api_key: Option<String>,

/// Hidden settings
pub hidden_settings: Option<HiddenSettings>,

@@ -228,6 +236,10 @@ pub enum UploadMethod {
Pinata,
#[serde(rename = "sdrive")]
Sdrive,
#[serde(rename = "cascade")]
Cascade,
#[serde(rename = "sense")]
Sense,
}

impl Display for UploadMethod {
31 changes: 30 additions & 1 deletion src/create_config/process.rs
@@ -309,7 +309,16 @@ pub fn process_create_config(args: CreateConfigArgs) -> Result<()> {
};

// upload method
let upload_options = vec!["Bundlr", "AWS", "NFT Storage", "SHDW", "Pinata", "SDrive"];
let upload_options = vec![
"Bundlr",
"AWS",
"NFT Storage",
"SHDW",
"Pinata",
"SDrive",
"Cascade",
"Sense",
];
config_data.upload_method = match Select::with_theme(&theme)
.with_prompt("What upload method do you want to use?")
.items(&upload_options)
@@ -323,6 +332,8 @@
3 => UploadMethod::SHDW,
4 => UploadMethod::Pinata,
5 => UploadMethod::Sdrive,
6 => UploadMethod::Cascade,
7 => UploadMethod::Sense,
_ => UploadMethod::Bundlr,
};

@@ -424,6 +435,24 @@ pub fn process_create_config(args: CreateConfigArgs) -> Result<()> {
});
}

if config_data.upload_method == UploadMethod::Cascade {
config_data.cascade_api_key = Some(
Input::with_theme(&theme)
.with_prompt("What is the Cascade api key?")
.interact()
.unwrap(),
);
}

if config_data.upload_method == UploadMethod::Sense {
config_data.sense_api_key = Some(
Input::with_theme(&theme)
.with_prompt("What is the Sense api key?")
.interact()
.unwrap(),
);
}
// is mutable

config_data.is_mutable = Confirm::with_theme(&theme)
1 change: 1 addition & 0 deletions src/main.rs
@@ -53,6 +53,7 @@ fn setup_logging(level: Option<EnvFilter>) -> Result<()> {
let file = OpenOptions::new()
.write(true)
.create(true)
.truncate(true)
.open(log_path)
.unwrap();

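The `.truncate(true)` added to `setup_logging` above matters because, without it, reopening an existing log file and writing a shorter message leaves stale bytes from the previous, longer contents. A self-contained demonstration (the file path and helper are illustrative):

```rust
use std::fs::{self, OpenOptions};
use std::io::Write;
use std::path::Path;

// Writes `old` to the file, then reopens it with truncate(true) and writes
// `new`; returns the file's final contents.
fn rewrite_log(path: &Path, old: &str, new: &str) -> std::io::Result<String> {
    fs::write(path, old)?;
    let mut file = OpenOptions::new()
        .write(true)
        .create(true)
        .truncate(true) // drop the old contents when opening
        .open(path)?;
    file.write_all(new.as_bytes())?;
    drop(file);
    fs::read_to_string(path)
}

fn main() -> std::io::Result<()> {
    let path = std::env::temp_dir().join("sugar_truncate_demo.log");
    // Without truncate(true), overwriting "hello world" with "hi" would
    // leave "hillo world" behind; with it, the file holds just "hi".
    assert_eq!(rewrite_log(&path, "hello world", "hi")?, "hi");
    fs::remove_file(&path)?;
    println!("ok");
    Ok(())
}
```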