+ +This part of the Walrus documentation is used to publish news and updates about Walrus's +development!
Walrus is an innovative decentralized storage network for blockchain apps and autonomous agents. The Walrus storage system is being released today as a developer preview for Sui builders in order to gather feedback. We expect a broad rollout to other web3 communities very soon!

Leveraging innovations in erasure coding, Walrus enables fast and robust encoding of unstructured data blobs into smaller slivers distributed and stored over a network of storage nodes. A subset of slivers can be used to rapidly reconstruct the original blob, even when up to two-thirds of the slivers are missing. This is possible while keeping the replication factor down to a minimal 4x-5x, similar to existing cloud-based services, but with the additional benefits of decentralization and resilience to more widespread faults.

Sui is the most advanced blockchain system with respect to storage on validators, with innovations such as a storage fund that future-proofs the cost of storing data on-chain. Nevertheless, Sui still requires complete data replication among all validators, resulting in a replication factor of 100x or more in today's Sui Mainnet. While this is necessary for replicated computing and smart contracts acting on the state of the blockchain, it is inefficient for simply storing unstructured data blobs, such as music, video, blockchain history, etc.

To tackle the challenge of high replication costs, Mysten Labs has developed Walrus, a decentralized storage network offering exceptional data availability and robustness with a minimal replication factor of 4x-5x. Walrus provides two key benefits:
- **Cost-Effective Blob Storage:** Walrus allows for the uploading of gigabytes of data at a time with minimal cost, making it an ideal solution for storing large volumes of data. Walrus can do this because the data blob is transmitted only once over the network, and storage nodes only spend a fraction of resources compared to the blob size. As a result, the more storage nodes the system has, the fewer resources each storage node uses per blob.

- **High Availability and Robustness:** Data stored on Walrus enjoys enhanced reliability and availability under fault conditions. Data recovery is still possible even if two-thirds of the storage nodes crash or come under adversarial control. Further, availability may be certified efficiently without downloading the full blob.
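The scaling argument behind cost-effective blob storage can be sketched with back-of-the-envelope arithmetic (the node counts here are hypothetical; the 5x expansion factor matches the replication factor quoted above):

```sh
# Per-node storage load for a 100 MiB blob under ~5x erasure-coded expansion.
# Each node stores only its own slivers, so per-node load shrinks as the
# network grows.
blob_mib=100
expansion=5
for nodes in 10 100 1000; do
  per_node_kib=$(( blob_mib * expansion * 1024 / nodes ))
  echo "${nodes} nodes: ~${per_node_kib} KiB per node"   # 51200, 5120, 512
done
```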
Decentralized storage can take multiple forms in modern ecosystems. For instance, it offers better guarantees for digital assets traded as NFTs. Unlike current designs that store data off-chain, decentralized storage ensures users own the actual resource, not just metadata, mitigating risks of data being taken down or misrepresented.

Additionally, decentralized storage is not only useful for storing data such as pictures or files with high availability; it can also double as a low-cost data availability layer for rollups. Here, sequencers can upload transactions to Walrus, and the rollup executor only needs to temporarily reconstruct them for execution.

We also believe Walrus will complement existing disaster recovery strategies for millions of enterprise companies. Not only is Walrus low-cost, it also provides unmatched layers of data availability, integrity, transparency, and resilience that centralized solutions cannot offer by design.

Walrus is powered by the Sui Network and scales horizontally to hundreds or thousands of networked decentralized storage nodes. This should enable Walrus to offer exabytes of storage at costs competitive with current centralized offerings, given the higher assurance and decentralization.

By releasing this developer preview, we hope to share some of the design decisions with the decentralized app developer community and gather feedback on the approach and the APIs for storing, retrieving, and certifying blobs. In this developer preview, all storage nodes are operated by Mysten Labs to help us understand use cases, fix bugs, and improve the performance of the software.

Future updates to Walrus will allow for dynamically changing the set of decentralized storage nodes, as well as changing the mapping of which slivers are managed by each storage node. The available operations and tools will also be expanded to cover more storage-related use cases. Many of these functions will be designed with the feedback we gather in mind.

Stay tuned for more updates on how Walrus will revolutionize data storage in the web3 ecosystem.
As part of this developer preview, we provide a binary client (currently for macOS and Ubuntu) that can be operated from the command line, a JSON API, and an HTTP API. We also offer the community an aggregator and publisher service, as well as a Devnet deployment of 10 storage nodes operated by Mysten Labs.
We hope developers will experiment with building applications that leverage the Walrus decentralized store in a variety of ways. As examples, we hope to see the community build:

- **Storage of media for NFTs or dapps:** Walrus can directly store and serve media such as images, sounds, sprites, videos, other game assets, etc. This is publicly available media that can be accessed using HTTP requests at caches to create multimedia dapps.

- **AI-related use cases:** Walrus can store clean data sets of training data, datasets with a known and verified provenance, model weights, and proofs of correct training for AI models. It may also be used to store and ensure the availability and authenticity of an AI model's output.

- **Storage of long-term archival of blockchain history:** Walrus can be used as a lower-cost decentralized store for blockchain history. For Sui, this can include sequences of checkpoints with all associated transaction and effects content, as well as historic snapshots of the blockchain state, code, or binaries.

- **Support availability for L2s:** Walrus enables parties to certify the availability of blobs, as required by L2s that need data to be stored and attested as available to all. This may also include the availability of extra audit data such as validity proofs, zero-knowledge proofs of correct execution, or large fraud proofs.

- **Support a full decentralized web experience:** Walrus can host full decentralized web experiences, including all resources (such as js, css, html, and media). These can provide content but also host the UX of dapps, enabling fully decentralized front- and back-ends on chain. It brings the full "web" back into "web3".

- **Support subscription models for media:** Creators can store encrypted media on Walrus and only provide access via decryption keys to parties that have paid a subscription fee or have paid for content. (Note that Walrus provides the storage; encryption and decryption must be done off Walrus.)

We are excited to see what else the web3 developer community can imagine!
For this developer preview, the public Walrus Devnet is openly available to all developers. Developer documentation is available at https://docs.walrus.site.

SUI Testnet tokens are the main currency for interacting with Walrus. Developers pay for Walrus Devnet storage using SUI Testnet tokens, which can be acquired at the Sui Testnet Discord faucet.

The Walrus Sites website, the Walrus docs, and this very blog are hosted on Walrus. To learn more about Walrus Sites and how you can deploy your own, see the Walrus Sites documentation.
Published on: 2024-08-12
We have redeployed the Walrus Devnet to incorporate various improvements to the Walrus storage nodes and clients. In this process, all blobs stored on Walrus were wiped. Note that this may happen again on Devnet and Testnet, but obviously not on the future Mainnet.
You can obtain the latest version of the `walrus` binary and the new configuration as described in the setup chapter.
If you had deployed any Walrus Sites, the site object on Sui and any SuiNS name are still valid. However, you need to re-store all blobs on Walrus. You can achieve this by running the `site-builder` tool (from the `walrus-sites` directory) as follows:
```sh
./target/release/site-builder --config site-builder/assets/builder-example.yaml update --force \
    <path to the site> <site object ID>
```
Besides many improvements to the storage nodes, the new version of Walrus includes the following user-facing changes:
- The API specification is now available at `/v1/api`.
- The `info` command is now also available in JSON mode.
- The `walrus` CLI now has a `--version` option.
- The client now reads its configuration from `~/.config/walrus` in addition to `~/.walrus`, and recognizes the extension `.yml` in addition to `.yaml`.
- Paths in the configuration file now expand a leading `~` symbol to the user's home directory.
- The `store` and `blob-status` commands are now more robust against Sui full nodes that aggressively prune past events and against load balancers that send transactions to different full nodes.
- The `walrus` CLI now properly handles hyphens in blob IDs.

This update also increases the number of shards to 1000, which is more representative of the expected value in Testnet and Mainnet.
In June, Mysten Labs announced Walrus, a new decentralized secure blob store design, and introduced a developer preview that currently stores over 12 TiB of data. Breaking the Ice gathered over 200 developers to build apps leveraging decentralized storage.

It is time to unveil the next stage of the project: Walrus will become an independent decentralized network with its own utility token, WAL, that will play a key role in the operation and governance of the network. Walrus will be operated by storage nodes through a delegated proof-of-stake mechanism using the WAL token. An independent Walrus foundation will encourage the advancement and adoption of Walrus, and support its community of users and developers.

Today, we published the Walrus whitepaper (also on GitHub), which offers additional details on the design of the system.

The whitepaper focuses on the steady-state design aspects of Walrus. Further details about the project, such as timelines, opportunities for community participation, how to join the network as a storage node, and plans around light nodes, will be shared in subsequent posts.

To be part of this journey, stay tuned for those upcoming posts.
Published on: 2024-10-17
Today, a community of operators launches the first public Walrus Testnet. This is an important milestone in validating the operation of Walrus as a decentralized blob store, by operating it on a set of independent storage nodes that change over time through a delegated proof-of-stake mechanism. The Testnet also brings functionality updates relating to governance, epochs, and blob deletion.

The most important user-facing new feature is optional blob deletion. The uploader of a blob can optionally indicate that a blob is "deletable". This information is stored in the Sui blob metadata object, and is also included in the event denoting when the blob is certified. Subsequently, the owner of the Sui blob metadata object can "delete" it. As a result, storage for the remaining period is reclaimed and can be used by subsequent blob storage operations.

Blob deletion allows more fine-grained storage cost management: smart contracts that wrap blob metadata objects can define logic that stores blobs and deletes them to minimize costs, reclaiming storage space before Walrus epochs end.

However, blob deletion is not an effective privacy mechanism in itself: copies of the blob may exist outside Walrus storage nodes, on caches and end-user stores or devices. Furthermore, if an identical blob is stored by multiple Walrus users, the blob will remain available on Walrus until no copy exists. Thus, deleting your own copy of a blob cannot guarantee that it is deleted from Walrus as a whole.

Walrus Testnet enables multiple epochs. Initially, the epoch duration is set to a single day to ensure the logic of epoch change is thoroughly tested. At Mainnet, epochs will likely be multiple weeks long.

The progress of epochs makes the expiry epoch of blobs meaningful, and blobs will become unavailable after their expiry epoch. The store command may be used to extend the expiry epoch of a blob that is still available. This operation is efficient: it only affects payments and metadata, and does not re-upload blob contents.

Payments for blob storage and for extending blob expiry are denominated in Testnet WAL, a Walrus token issued on the Sui Testnet. Testnet WAL has no value and an unlimited supply, so there is no need to covet or hoard it: it is issued only on Sui Testnet and exists purely for testing purposes.
WAL also has a smaller unit called FROST, similar to MIST for SUI. 1 WAL is equal to 1 billion (1,000,000,000) FROST.
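A quick sanity check of the conversion (the unit names and the 10^9 ratio come from the paragraph above):

```sh
# 1 WAL = 1 billion FROST, analogous to 1 SUI = 1 billion MIST.
frost_per_wal=1000000000
wal=5
frost=$(( wal * frost_per_wal ))
echo "${wal} WAL = ${frost} FROST"   # 5 WAL = 5000000000 FROST
```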
To make Testnet WAL available to all who want to experiment with the Walrus Testnet, we provide a utility and smart contract to convert Testnet SUI (which also has no value) into Testnet WAL using a one-to-one exchange rate. This rate is chosen arbitrarily, so one should generally not read too much into the actual WAL-denominated costs of storage on Testnet.

Find out how to request Testnet WAL tokens through the CLI.
The WAL token may also be used to stake with storage operators. Staked WAL can be unstaked and re-staked with other operators or used to purchase storage.

Each epoch, storage nodes are selected and allocated storage shards according to their delegated stake. At the end of each epoch, payments for storing blobs are distributed to storage nodes and those that delegate stake to them. Furthermore, important network parameters (such as total available storage and storage price) are set each epoch by the selected storage operators according to their stake weight.

A staking web dApp is provided to experiment with this functionality. Community members have also created explorers that can be used to view storage nodes when considering whom to stake with. Staking ensures that the ultimate governance of Walrus, directly in terms of storage nodes, and indirectly in terms of the parameters and software they choose, rests with WAL token holders.

Under the hood, and over the next months, we will be testing many aspects of epoch changes and storage node committee changes: better shard allocation mechanisms upon changes of storage node stake; efficient ways to sync state between storage nodes; as well as better ways for storage nodes to follow Sui event streams.
As part of the Testnet release of Walrus, the documentation and Move smart contracts have been updated, and can be found in the `walrus-docs` repository.
With the move to Walrus Testnet, Walrus Sites have also been updated! The new features in this update greatly increase the flexibility, speed, and security of Walrus Sites. Developers can now specify client-side routing rules, and add custom HTTP headers to the portals' responses for their site, expanding the possibilities for what Walrus Sites can do.

Migrate now to take advantage of these new features! The old Walrus Sites, based on Walrus Devnet, will still be available for a short time. However, Devnet will be wiped soon (as described below), so it is recommended to migrate as soon as possible.

The previous Walrus Devnet instance is now deprecated and will be shut down after 2024-10-31. All data stored on Walrus Devnet (including Walrus Sites) will no longer be accessible at that point. You need to re-upload all data to Walrus Testnet if you want it to remain accessible. Walrus Sites also need to be migrated.
Published on: 2025-01-16

We have reached a stage in development where it is beneficial to redeploy the Walrus Testnet to incorporate various improvements that include some backwards-incompatible changes. This redeployment happened on 2025-01-16. Make sure to get the latest binary and configuration as described in the setup section.

Note that all data on the previous Testnet instance has been wiped. All blobs need to be re-uploaded to the new Testnet instance, including Walrus Sites. In addition, there is a new version of the WAL token, so your previous WAL tokens will not work anymore. To use the Testnet v2, you need to obtain new WAL tokens. In the following sections, we describe the notable changes and the actions required for existing Walrus Sites.
The epoch duration has been increased from one day to two days to emphasize that this duration is different from Sui epochs (at Mainnet, epochs will likely be multiple weeks long). In addition, the maximum number of epochs a blob can be stored for has been reduced from 200 to 183 (corresponding to one year). The `walrus store` command now also supports the `--epochs max` flag, which stores the blob for the maximum number of epochs.
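The correspondence to one year can be verified with simple arithmetic:

```sh
# Maximum blob lifetime on the redeployed Testnet: 183 epochs of 2 days each.
epoch_days=2
max_epochs=183
lifetime_days=$(( max_epochs * epoch_days ))
echo "maximum blob lifetime: ${lifetime_days} days"   # 366 days, i.e., about one year
```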
Besides many improvements to the contracts and the storage-node service, the latest Walrus release also brings several user-facing improvements.
- The `walrus store` command now supports storing multiple files at once. This is faster and more cost-effective than storing each file separately, as transactions can be batched through PTBs. Notably, this is compatible with glob patterns offered by many shells, so you can, for example, run a command like `walrus store *.png --epochs 100` to store all PNG files in the current directory.
- The `walrus` CLI now supports creating, funding, and extending shared blobs using the `walrus share`, `walrus store --share`, and `walrus fund-shared-blob` commands. See the shared blobs section for more details.

Along with the redeployment of Walrus, we have also deployed a new version of the WAL contract. This means that you cannot use any WAL tokens from the previous Testnet instance with the new Testnet instance. You need to request new WAL tokens through the Testnet WAL faucet.
One reason for a full redeployment is to allow us to make some changes that are backwards-incompatible. Many of those are related to the contracts and thus less visible to users. There are, however, some changes that may affect you.

The format of the configuration files for storage nodes and clients has changed. Make sure to use the latest version of the configuration files; see the configuration section.
Several options of the `walrus` CLI have changed. Notably, all "short" variants of options (e.g., `-e` instead of `--epochs`) have been removed to prevent future confusion with new options. Additionally, the `--epochs` flag is now mandatory for the `walrus store` command (this also affects the JSON API).

Please refer to the CLI help (`walrus --help`, or `walrus <command> --help`) for further details.
The paths, request formats, and response formats of the HTTP APIs have changed for the storage nodes, as well as for the aggregator and publisher. Please refer to the section on the HTTP API for further details.
The Walrus Sites contracts have not changed, which means that all corresponding objects on Sui are still valid. However, the resources now point to blob IDs that do not yet exist on the new Testnet. The easiest way to fix existing sites is to simply update them with the `--force` flag:

```sh
site-builder update --epochs <number of epochs> --force <path to site> <existing site object>
```
As part of the new Testnet release of Walrus, the Move smart contracts have been updated; the deployed version can be found in the `walrus-docs` repository.
The key actors in the Walrus architecture are the following:

- **Users** through clients want to store and read blobs identified by their blob ID.

  These actors are ready to pay for service when it comes to writes and non-best-effort reads. Users also want to prove the availability of a blob to third parties without the cost of sending or receiving the full blob.

  Users might be malicious in various ways: they might not want to pay for services, prove the availability of unavailable blobs, modify/delete blobs without authorization, try to exhaust resources of storage nodes, and so on.
- **Storage nodes** hold one or many shards within a storage epoch.

  Each blob is erasure-encoded into many slivers, and slivers from each stored blob become part of all shards. A shard at any storage epoch is associated with a storage node that actually stores all slivers of the shard and is ready to serve them.

  A Sui smart contract controls the assignment of shards to storage nodes within storage epochs, and Walrus assumes that more than 2/3 of the shards are managed by correct storage nodes within each storage epoch. This means that Walrus must tolerate up to 1/3 of the shards being managed by Byzantine storage nodes (approximately 1/3 of the storage nodes being Byzantine) within each storage epoch and across storage epochs.

All clients and storage nodes operate a blockchain client (specifically on Sui), and mediate payments, resources (space), the mapping of shards to storage nodes, and metadata through blockchain smart contracts. Users interact with the blockchain to acquire storage resources and upload certificates for stored blobs. Storage nodes listen to blockchain events to coordinate their operations.
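The fault-tolerance assumption can be made concrete using the 1000 shards mentioned elsewhere in these documents (a sketch of the arithmetic, not the on-chain logic):

```sh
# With n shards, Walrus assumes more than 2/3 are managed by correct nodes,
# i.e., at most f = floor(n/3) shards may be Byzantine in any storage epoch.
n=1000
f=$(( n / 3 ))             # up to 333 shards may be Byzantine
correct_min=$(( n - f ))   # at least 667 shards are on correct nodes
echo "n=${n}: tolerates f=${f} Byzantine shards, >=${correct_min} correct"
```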
Walrus supports any additional number of optional infrastructure actors that can operate in a permissionless way:

- **Aggregators** are clients that reconstruct blobs from slivers and make them available to users over traditional web2 technologies (such as HTTP). They are optional in that end users may reconstruct blobs directly or run a local aggregator to perform Walrus reads over web2 technologies locally.

- **Caches** are aggregators with additional caching functionality to decrease latency and reduce load on storage nodes. Such cache infrastructures may also act as CDNs, split the cost of blob reconstruction over many requests, be better connected, and so on. A client can always verify that reads from such infrastructures are correct.

- **Publishers** are clients that help end users store a blob using web2 technologies, using less bandwidth and custom logic.

  In effect, they receive the blob to be published over traditional web2 protocols (like HTTP) and run the Walrus store protocol on the end user's behalf. This includes encoding the blob into slivers, distributing the slivers to storage nodes, collecting storage-node signatures and aggregating them into a certificate, as well as all other on-chain actions.

  They are optional in that a user can directly interact with Sui and the storage nodes to store blobs. An end user can always verify that a publisher performed their duties correctly by checking that an event associated with the point of availability for the blob exists on chain, and then either performing a read to see if Walrus returns the blob, or encoding the blob and comparing the result to the blob ID in the certificate.

Aggregators, publishers, and end users are not considered trusted components of the system, and they might deviate from the protocol arbitrarily. However, some of the security properties of Walrus only hold for honest end users that use honest intermediaries (caches and publishers). Walrus provides a means for end users to audit the correct operation of both caches and publishers.
The following list summarizes the basic encoding and cryptographic techniques used in Walrus:
- An erasure code encode algorithm takes a blob, splits it into a number of symbols, and encodes these into a larger number of symbols in such a way that a subset of the encoded symbols can be used to reconstruct the blob.

- Walrus uses a highly efficient erasure code and selects its parameters such that a third of the encoded symbols suffices for the decode algorithm to reconstruct the blob.

- The encoding is systematic, meaning that some storage nodes hold part of the original blob, allowing for fast random-access reads.

- All encoding and decoding operations are deterministic, and encoders have no discretion about them.

- For each blob, multiple symbols are combined into a sliver, which is then assigned to a shard.

- Storage nodes manage one or more shards, and the corresponding slivers of each blob are distributed to all the storage shards.

- The detailed encoding setup results in an expansion of the blob size by a small constant factor (the minimal 4x-5x replication factor mentioned earlier). This is independent of the number of shards and of the number of storage nodes.
Each blob is also associated with some metadata, including a blob ID, to allow verification:

- The blob ID is computed as an authenticator of the set of all shard data and metadata (byte size, encoding, blob hash).

- Walrus hashes a sliver representation in each of the shards and adds the resulting hashes into a Merkle tree. The root of the Merkle tree is the blob hash used to derive the blob ID that identifies the blob in the system.

- Each storage node can use the blob ID to check whether some shard data belongs to a blob using the authenticated structure corresponding to the blob hash (Merkle tree). A successful check means that the data is indeed as intended by the writer of the blob.

- As the writer of a blob might have incorrectly encoded a blob (by mistake or on purpose), any party that reconstructs a blob ID from shard slivers must check that it encodes to the correct blob ID. The same is necessary when accepting any blob claiming to be a specific blob ID.

- This process involves re-encoding the blob using the erasure code and deriving the blob ID again to check that the blob matches. This prevents a malformed blob (incorrectly erasure coded) from ever being read as a valid blob at any correct recipient.
- A set of slivers equal in size to the reconstruction threshold that belongs to a blob ID, but is either inconsistent or leads to the reconstruction of a different ID, represents an incorrect encoding. This can happen only if the user that encoded the blob was faulty or malicious and encoded it incorrectly.

- Walrus can extract one symbol per sliver to form an inconsistency proof. Storage nodes can delete slivers belonging to inconsistently encoded blobs, and upon request return either the inconsistency proof or an inconsistency certificate posted on chain.
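As a toy illustration of the blob-hash construction, here is a two-leaf Merkle root over sliver hashes using plain SHA-256 (the actual hash function, leaf encoding, and tree arity used by Walrus are not specified here, and real blobs span many shards):

```sh
# Hash each sliver, then hash the concatenated digests to obtain the root of
# a two-leaf Merkle tree; the root plays the role of the blob hash.
h0=$(printf 'sliver-for-shard-0' | sha256sum | cut -d ' ' -f 1)
h1=$(printf 'sliver-for-shard-1' | sha256sum | cut -d ' ' -f 1)
root=$(printf '%s%s' "$h0" "$h1" | sha256sum | cut -d ' ' -f 1)
echo "blob hash (Merkle root): ${root}"
```

Because hashing is deterministic, any party holding the same slivers derives the same root, which is what lets storage nodes and readers agree on the blob ID.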
In this document, we have left out the details of several further features.
Walrus supports operations to store and read blobs, and to prove and verify their availability. It ensures that content survives storage nodes suffering Byzantine faults and remains available and retrievable. It provides APIs to access the stored content over a CLI, SDKs, and web2 HTTP technologies, and supports content delivery infrastructures like caches and content distribution networks (CDNs).

Under the hood, storage cost is a small fixed multiple of the size of blobs (around 5x). Advanced erasure coding keeps the cost low, in contrast to the full replication of data traditional to blockchains, such as the >100x multiple for data stored in Sui objects. As a result, storage of much bigger resources (up to several GiB) is possible on Walrus at substantially lower cost than on Sui or other blockchains. Because encoded blobs are stored on all storage nodes, Walrus also provides greater robustness than designs in which a small number of replicas store the full blob.
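The cost comparison can be made concrete with the numbers quoted in this section (~5x expansion for Walrus versus a >100x replication multiple for fully replicated on-chain storage; the blob size is illustrative):

```sh
blob_mib=1024                         # a 1 GiB blob
walrus_mib=$(( blob_mib * 5 ))        # ~5x erasure-coded expansion, network-wide
replicated_mib=$(( blob_mib * 100 ))  # >=100x for full replication
echo "Walrus: ${walrus_mib} MiB total; full replication: ${replicated_mib} MiB total"
```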
Walrus uses the Sui chain for coordination and payments. Available storage is represented as Sui objects that can be acquired, owned, split, merged, and transferred. Storage space can be tied to a stored blob for a period of time, with the resulting Sui object used to prove availability on chain in smart contracts, or off chain using light clients.

The next chapter discusses the above operations relating to storage, retrieval, and availability in detail.

In the future, we plan to include in Walrus some minimal governance to allow storage nodes to change between storage epochs. Walrus is also compatible with periodic payments for continued storage. We also plan to implement storage attestation based on challenges to gain confidence that blobs are stored or at least available. Walrus also allows light nodes that store small parts of blobs to earn rewards for proving availability and assisting recovery. We will cover these topics in later documents. We also provide details of the encoding scheme in a separate document.
There are a few things that Walrus explicitly is not:

- Walrus does not reimplement a CDN that might be geo-replicated or have less than tens of milliseconds of latency. Instead, it ensures that traditional CDNs are usable and compatible with Walrus caches.

- Walrus does not reimplement a full smart-contracts platform with consensus or execution. It relies on Sui smart contracts when necessary, to manage Walrus resources and processes, including payments, storage epochs, and so on.

- Walrus supports storage of any blob, including encrypted blobs. However, Walrus itself is not the distributed key management infrastructure that manages and distributes encryption or decryption keys to support a full private storage ecosystem. It can, however, provide the storage layer for such infrastructures.
App builders may use Walrus in conjunction with any L1 or L2 blockchains to build experiences that require large amounts of data to be stored in a decentralized manner and possibly certified as available:

- **Storage of media for NFTs or dApps:** Walrus can directly store and serve media such as images, sounds, sprites, videos, other game assets, and so on. This is publicly available media that is accessed using HTTP requests at caches to create multimedia dApps.

- **AI-related use cases:** Walrus can store clean data sets of training data, datasets with a known and verified provenance, models, weights, and proofs of correct training for AI models. It can also store and ensure the availability of an AI model's output.

- **Storage of long-term archival of blockchain history:** Walrus can act as a lower-cost decentralized store for blockchain history. For Sui, this can include sequences of checkpoints with all associated transaction and effects content, as well as historic snapshots of the blockchain state, code, or binaries.

- **Support availability for L2s:** Walrus allows parties to certify the availability of blobs, as required by L2s that need data to be stored and attested as available to all. This may also include the availability of extra audit data such as validity proofs, zero-knowledge proofs of correct execution, or large fraud proofs.

- **Support a fully decentralized web experience:** Walrus can host fully decentralized web experiences, including all resources (such as js, css, html, and media). These can not only provide content, but also host the UX of dApps, enabling applications with fully decentralized front ends and back ends on chain. Walrus puts the full "web" into web3.

- **Support subscription models for media:** Creators can store encrypted media on Walrus and only provide access via decryption keys to parties that have paid a subscription fee or have paid for content. Walrus provides the storage; encryption and decryption need to happen off the system.
+While Walrus operations happen off Sui, they might interact with the blockchain flows defining the +resource life cycle.
+Systems overview of writes, illustrated in the previous image:
+A user acquires a storage resource of appropriate size and duration on chain, either by directly +buying it on the Walrus system object or a secondary market. A user can split, merge, and +transfer owned storage resources.
+When users want to store a blob, they first erasure code it and compute the +blob ID. Then they can perform the following steps themselves, or use a publisher to perform steps +on their behalf.
The user goes on chain (Sui) and updates a storage resource to register the blob ID with the desired size and lifetime. This emits an event, received by storage nodes. After the registration is confirmed on chain, the user continues the upload.
+The user sends the blob metadata to all storage nodes and each of the blob slivers to the storage +node that currently manages the corresponding shard.
+A storage node managing a shard receives a sliver and checks it against the blob ID. +It also checks that there is a blob resource with the blob ID that is authorized to store +a blob. If correct, the storage node then signs a statement that it holds the sliver for blob ID +(and metadata) and returns it to the user.
+The user puts together the signatures returned from storage nodes into an availability certificate +and submits it to the chain. When the certificate is verified on chain, an availability event for +the blob ID is emitted, and all other storage nodes seek to download any missing shards for the +blob ID. This event emitted by Sui is the point of availability (PoA) for the +blob ID.
+After the PoA, and without user involvement, storage nodes sync and recover any missing metadata +and slivers.
+The user waits for 2/3 of shard signatures to return to create the certificate of +availability. The rate of the code is below 1/3, allowing for reconstruction even if only 1/3 of +shards return the sliver for a read. Because at most 1/3 of the storage nodes can fail, this ensures +reconstruction if a reader requests slivers from all storage nodes. The full process can +be mediated by a publisher that receives a blob and drives the process to completion.
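The quorum arithmetic implied by this write flow can be sketched numerically. This is an illustrative calculation only; the function names and exact rounding are assumptions, not Walrus APIs.

```python
# Illustrative sketch of the write/read thresholds described above,
# for a committee of n_shards shards. Names and rounding are assumptions.

def write_quorum(n_shards: int) -> int:
    """Signatures needed for a certificate of availability: 2/3 of shards."""
    return -(-2 * n_shards // 3)  # ceil(2n/3)

def read_threshold(n_shards: int) -> int:
    """Slivers sufficient to reconstruct a blob: the code rate is below 1/3,
    so roughly 1/3 of the shards' slivers suffice."""
    return -(-n_shards // 3)  # ceil(n/3)

assert write_quorum(1000) == 667
assert read_threshold(1000) == 334
```

Because at most 1/3 of shards can be faulty, a writer that contacts all shards eventually collects a write quorum, and a reader that contacts all shards eventually collects enough slivers to reconstruct.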
+Because no content data is required to refresh the duration of storage, refresh is conducted fully +on chain within the protocol. To request an extension to the availability of a blob, a user provides +an appropriate storage resource. Upon success this emits an event that storage nodes receive to +extend the time for which each sliver is stored.
+When a correct storage node tries to reconstruct a sliver for a blob past PoA, +this may fail if the encoding of the blob was incorrect. In this case, the storage node can instead +extract an inconsistency proof for the blob ID. It then uses the proof to create an inconsistency +certificate and upload it on chain.
+The flow is as follows:
+A storage node fails to reconstruct a sliver, and instead computes an inconsistency proof.
+The storage node sends the blob ID and inconsistency proof to all storage nodes of the Walrus +epoch. The storage nodes verify the proof and sign it.
+The storage node who found the inconsistency aggregates the signatures into an inconsistency +certificate and sends it to the Walrus smart contract, which verifies it and emits an inconsistent +resource event.
Upon receiving an inconsistent resource event, correct storage nodes delete the sliver data for the blob ID and record in the metadata to return `None` for this blob ID for the availability period. No storage attestation challenges are issued for this blob ID.
A blob ID that is inconsistent always resolves to `None` upon reading, because the read process re-encodes the received blob to check that the blob ID is correctly derived from a consistent encoding. This means that an inconsistency proof reveals only a true fact to storage nodes (which do not otherwise run decoding), and does not change the output of a read in any case.
However, partial reads leveraging the systematic nature of the encoding might successfully return partial data for inconsistently encoded files. Thus, if consistency and availability of reads are important, dApps should perform full reads rather than partial reads.
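The read-time consistency check can be sketched as follows. The derivation function here is a stand-in (SHA-256 over the raw bytes); the real blob ID is derived from the erasure-coded blob and the system configuration.

```python
import hashlib
from typing import Optional

def derive_blob_id(blob: bytes) -> bytes:
    # Stand-in derivation; Walrus derives the ID from the encoded blob.
    return hashlib.sha256(blob).digest()

def checked_read(reconstructed: bytes, expected_blob_id: bytes) -> Optional[bytes]:
    """Return the blob only if re-deriving its ID matches; otherwise None,
    mirroring how reads of inconsistently encoded blobs resolve to None."""
    if derive_blob_id(reconstructed) != expected_blob_id:
        return None
    return reconstructed

blob = b"example blob"
bid = derive_blob_id(blob)
assert checked_read(blob, bid) == blob
assert checked_read(b"inconsistent", bid) is None
```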
+A user can read stored blobs either directly or through an aggregator/cache. The operations are the +same for direct user access, for aggregators, and caches in case of cache misses. In practice, most +reads happen through caches for blobs that are hot and do not result in requests to storage nodes.
+The reader gets the metadata for the blob ID from any storage node, and authenticates it using +the blob ID.
The reader then sends a request to the storage nodes for the shards corresponding to the blob ID and waits for a sufficient number to respond. Sufficient requests are sent in parallel to ensure low latency for reads.
+The reader authenticates the slivers returned with the blob ID, reconstructs the blob, and decides +whether the contents are a valid blob or inconsistent.
+Optionally, for a cache, the result is cached and can be served without reconstruction until it is +evicted from the cache. Requests for the blob to the cache return the blob contents, or a proof +that the blob is inconsistently encoded.
+During an epoch, a correct storage node challenges all shards to provide symbols for blob slivers +past PoA:
+The list of available blobs for the epoch is determined by the sequence of Sui events up +to the past epoch. Inconsistent blobs are not challenged, and a record proving this status +can be returned instead.
+A challenge sequence is determined by providing a seed to the challenged shard. The sequence is +then computed based both on the seed and the content of each challenged blob ID. This creates +a sequential read dependency.
+The response to the challenge provides the sequence of shard contents for the blob IDs in a +timely manner.
+The challenger node uses thresholds to determine whether the challenge was passed, and reports +the result on chain.
+The challenge/response communication is authenticated.
+Challenges provide some reassurance that the storage node can actually recover shard data in a +probabilistic manner, avoiding storage nodes getting payment without any evidence they might +retrieve shard data. The sequential nature of the challenge and some reasonable timeout also ensures +that the process is timely.
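The sequential read dependency of challenges can be sketched as follows; the hash choice and state update are illustrative assumptions, not the Walrus protocol.

```python
import hashlib

def challenge_sequence(seed: bytes, shard: list[bytes], rounds: int) -> list[int]:
    """Each challenged index depends on the seed AND on the content read for
    the previous challenge, so responses cannot be precomputed or fetched
    lazily without actually holding the shard data."""
    indices, state = [], seed
    for _ in range(rounds):
        idx = int.from_bytes(hashlib.sha256(state).digest(), "big") % len(shard)
        indices.append(idx)
        state = hashlib.sha256(state + shard[idx]).digest()  # sequential dependency
    return indices

shard = [bytes([i]) * 8 for i in range(32)]
# Deterministic given the same seed and shard contents:
assert challenge_sequence(b"seed", shard, 4) == challenge_sequence(b"seed", shard, 4)
```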
Walrus uses Sui smart contracts to coordinate storage operations as resources that have a lifetime, and to process payments. Smart contracts also facilitate governance to determine the storage nodes holding each storage shard. The following content outlines these operations and refers to them as part of the read/write paths.
+Metadata is the only blob element ever exposed to Sui or its validators, as the content +of blobs is always stored off chain on Walrus storage nodes and caches. The storage nodes or caches +do not have to overlap with any Sui infrastructure components (such as validators), and the storage +epochs can be of different lengths and not have the same start/end times as Sui epochs.
+A number of Sui smart contracts hold the metadata of the Walrus system and all its entities.
+A Walrus system object holds the committee of storage nodes for the current storage epoch. The +system object also holds the total available space on Walrus and the price per unit of storage (1 +KiB).
+These values are determined by 2/3 agreement between the storage nodes for the storage +epoch. Users can pay to purchase storage space for some time duration. These space resources can +be split, merged, and transferred. Later, they can be used to place a blob ID into Walrus.
+The storage fund holds funds for storing blobs over one or multiple storage epochs. When +purchasing storage space from the system object, users pay into the storage fund separated over +multiple storage epochs. Payments are made each epoch to storage nodes according to performance +(details follow).
+A user acquires some storage through the contracts or transfer and can assign to it a blob ID, +signifying they want to store this blob ID into it. This emits a Move resource event that +storage nodes listen for to expect and authorize off-chain storage operations.
+Eventually a user holds an off-chain availability certificate from storage nodes for a blob +ID. The user uploads the certificate on chain to signal that the blob ID is available for an +availability period. The certificate is checked against the latest Walrus committee, +and an availability event is emitted for the blob ID if correct. This is the point of +availability for the blob.
+At a later time, a certified blob's storage can be extended by adding a storage object to it +with a longer expiry period. This facility can be used by smart contracts to extend the +availability of blobs stored in perpetuity as long as funds exist to continue providing storage.
In case a blob ID is not correctly encoded, an inconsistency proof certificate can be uploaded on chain at a later time. This action emits an inconsistent blob event, signaling that reads of the blob ID always return `None`. This indicates that its slivers can be deleted by storage nodes, except for an indicator to return `None`.
Users writing to Walrus need to perform Sui transactions to acquire storage and certify blobs. Users creating or consuming proofs for attestations of blob availability read the chain only to prove or verify emission of events. Nodes read the blockchain to get committee metadata only once per epoch, and then request slivers directly from storage nodes by blob ID to perform reads on Walrus resources.
Each Walrus storage epoch is represented by the Walrus system object, which contains the storage committee and various metadata about storage nodes, like the mapping between shards and storage nodes, available space, and current costs.
+Users can go to the system object for the period and buy some storage amount for one or more +storage epochs. At each storage epoch there is a price for storage, and the payment provided becomes +part of a storage fund for all the storage epochs that span the storage bought. There is a +maximum number of storage epochs in the future for which storage can be bought (approximately 2 +years). Storage is a resource that can be split, merged, and transferred.
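The separation of a purchase into per-epoch payments into the storage fund can be sketched with a toy model; the units and function names here are assumptions, not the contract's accounting.

```python
def fund_payments(price_per_epoch: int, start_epoch: int, end_epoch: int) -> dict[int, int]:
    """Allocate the price of one epoch of storage to each storage epoch in
    [start_epoch, end_epoch); the fund later pays nodes epoch by epoch."""
    return {epoch: price_per_epoch for epoch in range(start_epoch, end_epoch)}

payments = fund_payments(price_per_epoch=100, start_epoch=3, end_epoch=7)
assert sum(payments.values()) == 400  # total paid into the fund for 4 epochs
```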
+At the end of the storage epoch, part of the funds in the storage fund need to be allocated to +storage nodes. The idea here is for storage nodes to perform light audits of each other, +and suggest which nodes are to be paid based on the performance of these audits.
+ +Walrus operations can be separated in interactions with the Sui chain, which +is used by Walrus for coordination and governance, and off-chain +interactions between clients and storage nodes.
+ +This chapter provides an overview of the architecture and encoding +mechanisms of the Walrus system.
+Use the glossary as a reference for many of the bolded terms used in this +documentation.
+ +The properties below hold true subject to the assumption that for all storage epochs 2/3 of shards +are operated by storage nodes that faithfully and correctly follow the Walrus protocol.
+As described before, each blob is encoded into slivers using an erasure code and a +blob ID is cryptographically derived. For a given blob ID there is a point of availability (PoA) +and an availability period, observable through an event on the Sui chain.
The following properties relate to the PoA: after the PoA, a read of a correctly encoded blob within the availability period eventually succeeds and returns the blob, while a read of an inconsistently encoded blob ID returns `None`.

Some assurance properties ensure the correct internal processes of Walrus storage nodes. For the purposes of defining these, an inconsistency proof proves that a blob ID was stored by a user that incorrectly encoded a blob. After an inconsistency certificate for a blob ID is recorded on chain, reads of that blob ID return `None`.
.Note that there is no delete operation and a blob ID past the PoA will be available for the full +availability period.
+Before the PoA it is the responsibility of a client to ensure the availability of a blob and its +upload to Walrus. After the PoA it is the responsibility of Walrus as a system to maintain the +availability of the blob as part of its operation for the full availability period remaining. +Emission of the event corresponding to the PoA for a blob ID attests its availability.
+From a developer perspective, some Walrus components are objects and smart contracts on +Sui, and some components are Walrus-specific binaries and services. As a rule, Sui is used to +manage blob and storage node metadata, while Walrus-specific services are used to store and +read blob contents, which can be very large.
+Walrus defines a number of objects and smart contracts on Sui:
The Walrus system object ID can be found in the Walrus `client_config.yaml` file (see Configuration). You may use any Sui explorer to look at its content, as well as explore the content of blob objects. There is more information about these in the quick reference to the Walrus Sui structures.
Walrus is also composed of a number of Walrus-specific services and binaries:
Aggregators, publishers, and other services use the client APIs to interact with Walrus. End users of services using Walrus interact with the store via custom services, aggregators, or publishers that expose HTTP APIs, avoiding the need to run a binary client locally.
+ +This guide introduces all the concepts needed to build applications that use Walrus as a storage +or availability layer. The overview provides more background and explains +in more detail how Walrus operates internally.
+This developer guide describes the following:
+Refer again to the glossary of terms as a reference.
The current Testnet release of Walrus and Walrus Sites is a preview intended to showcase the technology and solicit feedback from builders, users, and storage-node operators. All transactions are executed on the Sui Testnet and use Testnet WAL and SUI, which have no value. The state of the store can and will be wiped at any point, possibly without warning. Do not rely on this Testnet for any production purposes; it comes with no availability or persistence guarantees.
+Furthermore, encodings and blob IDs may be incompatible with the future Testnet and Mainnet, and +developers will be responsible for migrating any Testnet applications and data to Mainnet. Detailed +migration guides will be provided when Mainnet becomes available.
+Also see the Testnet terms of service under which this Testnet is made +available.
+Walrus stores blobs across storage nodes in an encoded form, and refers +to blobs by their blob ID. The blob ID is deterministically derived from the content of a blob +and the Walrus configuration. The blob ID of two files with the same content will be the same.
You can derive the blob ID of a file locally using the command `walrus blob-id <file path>`.
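Because the blob ID is a deterministic function of the content (and configuration), identical files always produce the same ID. The sketch below illustrates this property with a stand-in derivation (SHA-256) rendered as URL-safe base64; the real `walrus blob-id` derivation differs.

```python
import base64
import hashlib

def blob_id_b64(content: bytes) -> str:
    # Stand-in derivation; Walrus computes the ID over the encoded blob.
    digest = hashlib.sha256(content).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode()

assert blob_id_b64(b"same bytes") == blob_id_b64(b"same bytes")
assert blob_id_b64(b"same bytes") != blob_id_b64(b"other bytes")
```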
Walrus may be used to store a blob via the native client APIs or a publisher.
+All blobs stored in Walrus are public and discoverable by all. Therefore you must not use Walrus +to store anything that contains secrets or private data without additional measures to protect +confidentiality.
Under the hood a number of operations happen both on Sui as well as on storage nodes. In particular, the client encodes the blob and derives its blob ID, a `u256` often encoded as a URL-safe base64 string.
often encoded as a URL-safe base64 string.A blob is considered available on Walrus once the corresponding Sui blob object has been +certified in the final step. The steps involved in a store operation can be executed by the binary +client, or a publisher that accepts and publishes blobs via HTTP.
Walrus currently allows the storage of blobs up to a maximum size that may be determined through the `walrus info` CLI command. The maximum blob size is currently 13.3 GiB. You may store larger blobs by splitting them into smaller chunks.
Blobs are stored for a certain number of epochs, as specified at the time they were stored. Walrus +storage nodes ensure that within these epochs a read succeeds. The current Testnet uses a short +epoch duration of two days for testing purposes, but Mainnet epochs are planned to be multiple +weeks.
+Walrus can also be used to read a blob after it is stored by providing its blob ID. +A read is executed by performing the following steps:
The steps involved in the read operation are performed by the binary client, or the aggregator service that exposes an HTTP interface to read blobs. Reads are extremely resilient and will succeed in recovering the blob in all cases even if up to one-third of storage nodes are unavailable. In most cases, after synchronization is complete, the blob can be read even if two-thirds of storage nodes are down.
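This resilience can be sketched numerically (illustrative only; the function name is not a Walrus API): since the code rate is below 1/3, roughly a third of the slivers suffice to reconstruct a blob.

```python
def can_reconstruct(n_shards: int, slivers_received: int) -> bool:
    needed = -(-n_shards // 3)  # ceil(n/3) slivers suffice to decode
    return slivers_received >= needed

n = 1000
assert can_reconstruct(n, n - n // 3)        # one-third of nodes down
assert can_reconstruct(n, n - (2 * n) // 3)  # two-thirds down, after sync
```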
Walrus can be used to certify the availability of a blob using Sui. Checking that this happened may currently be done in three different ways:
The `walrus blob-status` command may be used to identify the event ID that needs to be checked.
+Once a blob is certified, Walrus will ensure that sufficient slivers will always be +available on storage nodes to recover it within the specified epochs.
+Stored blobs can be optionally set as deletable by the user that creates them. This metadata is +stored in the Sui blob object, and whether a blob is deletable or not is included in certified blob +events. A deletable blob may be deleted by the owner of the blob object, to reclaim and re-use +the storage resource associated with it.
+If no other copies of the blob exist in Walrus, deleting a blob will eventually make it +unrecoverable using read commands. However, if other copies of the blob exist on Walrus, a delete +command will reclaim storage space for the user that invoked it, but will not make the blob +unavailable until all other copies have been deleted or expire.
+ +This section is optional and enables advanced use cases.
+You can interact with Walrus purely through the client CLI, and JSON or HTTP APIs provided, without +querying or executing transactions on Sui directly. However, Walrus uses Sui to manage its metadata +and smart contract developers can read information about the Walrus system, as well as stored blobs, +on Sui.
+The Move code of the Walrus Testnet contracts is available at +https://github.com/MystenLabs/walrus-docs/blob/main/contracts. An example package using +the Walrus contracts is available at +https://github.com/MystenLabs/walrus-docs/blob/main/examples/move.
+The following sections provide further insights into the contract and an overview of how you may use +Walrus objects in your own Sui smart contracts.
Walrus Mainnet will use new Move packages with `struct` layouts and function signatures that may not be compatible with this package. Move code that builds against this package will need to be rewritten.
Walrus blobs are represented as Sui objects of type `Blob`. A blob is first registered, indicating that the storage nodes should expect slivers from a blob ID to be stored. Then a blob is certified, indicating that a sufficient number of slivers have been stored to guarantee the blob's availability. When a blob is certified, its `certified_epoch` field contains the epoch in which it was certified.
A `Blob` object is always associated with a `Storage` object, reserving enough space for a long enough period for the blob's storage. A certified blob is available for the period the underlying storage resource guarantees storage.

Concretely, `Blob` and `Storage` objects have the following fields, which can be read through the Sui SDKs:
/// The blob structure represents a blob that has been registered with some storage,
/// and then may eventually be certified as being available in the system.
public struct Blob has key, store {
    id: UID,
    registered_epoch: u32,
    blob_id: u256,
    size: u64,
    encoding_type: u8,
    // Stores the epoch first certified.
    certified_epoch: option::Option<u32>,
    storage: Storage,
    // Marks if this blob can be deleted.
    deletable: bool,
}

/// Reservation for storage for a given period, which is inclusive start, exclusive end.
public struct Storage has key, store {
    id: UID,
    start_epoch: u32,
    end_epoch: u32,
    storage_size: u64,
}
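As an illustration of how these fields combine, the sketch below checks the availability of a blob at a given epoch from a dict mirroring the Move structs above. The dict representation is an assumption for the sketch; the field semantics follow the inclusive-start, exclusive-end reservation.

```python
def is_available(blob: dict, epoch: int) -> bool:
    """A blob is available at `epoch` if it was certified and its storage
    reservation covers the epoch (start inclusive, end exclusive)."""
    storage = blob["storage"]
    return (
        blob["certified_epoch"] is not None
        and storage["start_epoch"] <= epoch < storage["end_epoch"]
    )

blob = {"certified_epoch": 5, "storage": {"start_epoch": 4, "end_epoch": 10}}
assert is_available(blob, 9)
assert not is_available(blob, 10)  # end_epoch is exclusive
```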
All fields of `Blob` and `Storage` objects can be read using the expected functions:
// Blob functions
public fun blob_id(b: &Blob): u256;
public fun size(b: &Blob): u64;
public fun erasure_code_type(b: &Blob): u8;
public fun registered_epoch(self: &Blob): u32;
public fun certified_epoch(b: &Blob): &Option<u32>;
public fun storage(b: &Blob): &Storage;
...

// Storage functions
public fun start_epoch(self: &Storage): u32;
public fun end_epoch(self: &Storage): u32;
public fun storage_size(self: &Storage): u64;
...
When a blob is first registered, a `BlobRegistered` event is emitted that informs storage nodes that they should expect slivers associated with its blob ID. Eventually, when the blob is certified, a `BlobCertified` event is emitted containing information about the blob ID and the epoch after which the blob will be deleted. Before that epoch the blob is guaranteed to be available.
/// Signals that a blob with metadata has been registered.
public struct BlobRegistered has copy, drop {
    epoch: u32,
    blob_id: u256,
    size: u64,
    encoding_type: u8,
    end_epoch: u32,
    deletable: bool,
    // The object id of the related `Blob` object
    object_id: ID,
}

/// Signals that a blob is certified.
public struct BlobCertified has copy, drop {
    epoch: u32,
    blob_id: u256,
    end_epoch: u32,
    deletable: bool,
    // The object id of the related `Blob` object
    object_id: ID,
    // Marks if this is an extension for explorers, etc.
    is_extension: bool,
}
The `BlobCertified` event with `deletable` set to false and an `end_epoch` in the future indicates that the blob will be available until this epoch. A light client proof that this event was emitted for a blob ID constitutes a proof of availability for the data with this blob ID.
When a deletable blob is deleted, a `BlobDeleted` event is emitted:
/// Signals that a blob has been deleted.
public struct BlobDeleted has copy, drop {
    epoch: u32,
    blob_id: u256,
    end_epoch: u32,
    // The object ID of the related `Blob` object.
    object_id: ID,
    // If the blob object was previously certified.
    was_certified: bool,
}
The `InvalidBlobID` event is emitted when storage nodes detect an incorrectly encoded blob. Anyone attempting a read on such a blob is guaranteed to also detect it as invalid.
/// Signals that a BlobID is invalid.
public struct InvalidBlobID has copy, drop {
    epoch: u32, // The epoch in which the blob ID is first registered as invalid
    blob_id: u256,
}
System-level events such as `EpochChangeStart` and `EpochChangeDone` indicate transitions between epochs, and associated events such as `ShardsReceived`, `EpochParametersSelected`, and `ShardRecoveryStart` indicate storage-node-level events related to epoch transitions, shard migrations, and epoch parameters.
The Walrus system object contains metadata about the available and used storage, as well as the +price of storage per KiB of storage in FROST. The committee +structure within the system object can be used to read the current epoch number, as well as +information about the committee.
public struct SystemStateInnerV1 has key, store {
    id: UID,
    /// The current committee, with the current epoch.
    committee: BlsCommittee,
    // Some accounting
    total_capacity_size: u64,
    used_capacity_size: u64,
    /// The price per unit size of storage.
    storage_price_per_unit_size: u64,
    /// The write price per unit size.
    write_price_per_unit_size: u64,
    /// Accounting ring buffer for future epochs.
    future_accounting: FutureAccountingRingBuffer,
    /// Event blob certification state
    event_blob_certification_state: EventBlobCertificationState,
}

/// This represents a BLS signing committee for a given epoch.
public struct BlsCommittee has store, copy, drop {
    /// A vector of committee members
    members: vector<BlsCommitteeMember>,
    /// The total number of shards held by the committee
    n_shards: u16,
    /// The epoch in which the committee is active.
    epoch: u32,
}

public struct BlsCommitteeMember has store, copy, drop {
    public_key: Element<G1>,
    weight: u16,
    node_id: ID,
}
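As a rough illustration of how the `storage_price_per_unit_size` above is applied (the unit of storage is 1 KiB, priced in FROST), the sketch below computes a storage cost. The rounding and function names are assumptions, not the contract's logic.

```python
FROST_PER_WAL = 1_000_000_000  # 1 WAL = 1 billion FROST

def storage_cost_frost(size_bytes: int, price_per_kib: int, epochs: int) -> int:
    units = -(-size_bytes // 1024)  # round up to whole 1 KiB units
    return units * price_per_kib * epochs

cost = storage_cost_frost(size_bytes=5000, price_per_kib=100, epochs=3)
assert cost == 1500  # 5 KiB units * 100 FROST * 3 epochs
```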
+
+
+
+ To make communication as clear and efficient as possible, we make sure to use a single term for +every Walrus entity/concept and do not use any synonyms. The following table lists various +concepts, their canonical name, and how they relate to or differ from other terms.
+Italicized terms in the description indicate other specific Walrus terms contained in the table.
+Approved name | Description |
---|---|
storage node (SN) | entity storing data for Walrus; holds one or several shards |
blob | single unstructured data object stored on Walrus |
permanent blob | blob which cannot be deleted by its owner and is guaranteed to be available until at least its expiry epoch (assuming it is valid) |
deletable blob | blob which can be deleted by its owner at any time to be able to reuse the storage resource |
shard | (disjoint) subset of erasure-encoded data of all blobs; at every point in time, a shard is assigned to and stored on a single SN |
RedStuff | our erasure-encoding approach, which uses two different encodings (primary and secondary) to enable shard recovery; details are available in the whitepaper |
sliver | erasure-encoded data of one shard corresponding to a single blob for one of the two encodings; this contains several erasure-encoded symbols of that blob but not the blob metadata |
sliver pair | the combination of a shard’s primary and secondary sliver |
blob ID | cryptographic ID computed from a blob’s slivers |
blob metadata | metadata of one blob; in particular, this contains a hash per shard to enable the authentication of slivers and recovery symbols |
(end) user | any entity/person that wants to store or read blobs on/from Walrus; can act as a Walrus client itself or use the simple interface exposed by publishers and caches |
publisher | service interacting with Sui and the SNs to store blobs on Walrus; offers a simple HTTP POST endpoint to end users |
aggregator | service that reconstructs blobs by interacting with SNs and exposes a simple HTTP GET endpoint to end users |
cache | an aggregator with additional caching capabilities |
(Walrus) client | entity interacting directly with the SNs; this can be an aggregator/cache, a publisher, or an end user |
(blob) reconstruction | decoding of the primary slivers to obtain the blob; includes re-encoding the blob and checking the Merkle proofs |
(shard/sliver) recovery | process of an SN recovering a sliver or full shard by obtaining recovery symbols from other SNs |
storage attestation | process where SNs exchange challenges and responses to demonstrate that they are storing their currently assigned shards |
certificate of availability (CoA) | a blob ID with signatures of SNs holding at least 2/3 of shards in a specific epoch |
point of availability (PoA) | point in time when a CoA is submitted to Sui and the corresponding blob is guaranteed to be available until its expiration |
inconsistency proof | set of several recovery symbols with their Merkle proofs such that the decoded sliver does not match the corresponding hash; this proves an incorrect/inconsistent encoding by the client |
inconsistency certificate | an aggregated signature from 2/3 of SNs (weighted by their number of shards) that they have seen and stored an inconsistency proof for a blob ID |
storage committee | the set of SNs for a storage epoch, including metadata about the shards they are responsible for and other metadata |
member | an SN that is part of a committee at some epoch |
storage epoch | the epoch for Walrus, as distinct from the epoch for Sui |
availability period | the period specified in storage epochs for which a blob is certified to be available on Walrus |
expiry | the end epoch at which a blob is no longer available and can be deleted; the end epoch is always exclusive |
WAL | the native token of Walrus |
FROST | the smallest unit of WAL (similar to MIST for SUI); 1 WAL is equal to 1 billion (1000000000) FROST |
Welcome to the developer documentation for Walrus, a decentralized storage and data availability +protocol designed specifically for large binary files, or "blobs". Walrus focuses on providing a +robust but affordable solution for storing unstructured content on decentralized storage nodes +while ensuring high availability and reliability even in the presence of Byzantine faults.
+If you are viewing this site at https://docs.walrus.site, you are fetching this from +Walrus behind the scenes. See the Walrus Sites chapter for further +details on how this works.
The current Testnet release of Walrus and Walrus Sites is a preview intended to showcase the technology and solicit feedback from builders, users, and storage-node operators. All transactions are executed on the Sui Testnet and use Testnet WAL and SUI, which have no value. The state of the store can and will be wiped at any point, possibly without warning. Do not rely on this Testnet for any production purposes; it comes with no availability or persistence guarantees.
+Furthermore, encodings and blob IDs may be incompatible with the future Testnet and Mainnet, and +developers will be responsible for migrating any Testnet applications and data to Mainnet. Detailed +migration guides will be provided when Mainnet becomes available.
+Also see the Testnet terms of service under which this Testnet is made +available.
+All blobs stored in Walrus are public and discoverable by all. Therefore you must not use Walrus +to store anything that contains secrets or private data without additional measures to protect +confidentiality.
+Storage and retrieval: Walrus supports storage operations to write and read blobs. It also +allows anyone to prove that a blob has been stored and is available for retrieval at a later +time.
+Cost efficiency: By utilizing advanced erasure coding, Walrus maintains storage costs at +approximately five times the size of the stored blobs, and encoded parts of each blob are stored +on each storage node. This is significantly more cost-effective than traditional full-replication +methods and much more robust against failures than protocols that only store each blob on a subset +of storage nodes.
+Integration with the Sui blockchain: Walrus leverages Sui +for coordination, attesting availability, and payments. Storage space is represented as a resource +on Sui, which can be owned, split, merged, and transferred. Stored blobs are also represented by +objects on Sui, which means that smart contracts can check whether a blob is available and for how +long, extend its lifetime or optionally delete it.
Epochs, tokenomics, and delegated proof of stake: Walrus is operated by a committee of storage nodes that evolves between epochs. A native token, WAL (and its subdivision FROST, where 1 WAL is equal to 1 billion FROST), is used to delegate stake to storage nodes, and those with high stake become part of the epoch committee. The WAL token is also used for payments for storage. At the end of each epoch, rewards for selecting storage nodes, storing, and serving blobs are distributed to storage nodes and those that stake with them. All these processes are mediated by smart contracts on the Sui platform.
+Flexible access: Users can interact with Walrus through a command-line interface (CLI), +software development kits (SDKs), and web2 HTTP technologies. Walrus is designed to work well +with traditional caches and content distribution networks (CDNs), while ensuring all operations +can also be run using local tools to maximize decentralization.
+Walrus's architecture ensures that content remains accessible and retrievable even when many +storage nodes are unavailable or malicious. Under the hood it uses modern error correction +techniques based on fast linear fountain codes, augmented to ensure resilience against Byzantine +faults, and a dynamically changing set of storage nodes. The core of Walrus remains simple, and +storage node management and blob certification leverages Sui smart contracts.
+This documentation is split into several parts:
+Finally, we provide a glossary that explains the terminology used throughout the +documentation.
+This documentation is built using mdBook from source files in +https://github.com/MystenLabs/walrus-docs/. Please report or fix any errors you find in this +documentation in that GitHub project.
+ +