docs(S3): update mentions of S3 in documentation MTA-5188 #3874

Open · wants to merge 6 commits into `main`
@@ -1,5 +1,5 @@
---
title: Migration to the new S3 backend (HIVE) for all regions
title: Migration to the new Object Storage backend (HIVE) for all regions
status: changed
author:
fullname: 'Join the #container-registry channel on Slack.'
@@ -9,4 +9,4 @@ category: containers
product: container-registry
---

All regions were migrated to the new S3 backend (HIVE) and are now using its highly redundant #MultiAZ infrastructure in `FR-PAR`. As a result, almost all recent issues regarding the registry are resolved.
All regions were migrated to the new Object Storage backend (HIVE) and are now using its highly redundant #MultiAZ infrastructure in `FR-PAR`. As a result, almost all recent issues regarding the registry are resolved.
2 changes: 1 addition & 1 deletion components/docs-editor.mdx
@@ -259,7 +259,7 @@ At top of `.mdx` file, you MUST add data in frontmatter:

```
---
title: Migration to the new S3 backend (HIVE) for all regions
title: Migration to the new Object Storage backend (HIVE) for all regions
status: changed
author:
fullname: 'Join the #container-registry channel on Slack.'
```
@@ -35,7 +35,7 @@ More information on the QCOW2 file format, and how to use it can be found in the

1. Create a Scaleway Object Storage bucket.

You need an S3 bucket to export your QCOW2 file into. Any bucket that belongs to the same project as the snapshot can be used. However, if you do not have one already, you can [create it](/storage/object/how-to/create-a-bucket/) in the console.
You need an Object Storage bucket to export your QCOW2 file into. Any bucket that belongs to the same project as the snapshot can be used. However, if you do not have one already, you can [create it](/storage/object/how-to/create-a-bucket/) in the console.
2. Create a snapshot from a volume.

To use this functionality, you must [create a snapshot](/compute/instances/how-to/create-a-snapshot/#how-to-create-a-snapshot) from the volume you want to export.
@@ -53,7 +53,7 @@ More information on the QCOW2 file format, and how to use it can be found in the
- The secret key of your API key pair (`<secret_key>`)
- The snapshot ID (`<snapshot_uuid>`)
- The name of the Object Storage bucket to store the snapshot (which has to exist in the same Scaleway region as the snapshot)
- A key (can be any acceptable key/object name for Scaleway S3 (suffixing qcow2 images with `.qcow2`))
- A key (can be any acceptable key/object name for Scaleway Object Storage (suffixing qcow2 images with `.qcow2`))

The API returns an output as in the following example:
```json
```
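
To make the parameters above concrete, here is a minimal sketch of the export call using `curl`. The endpoint path, body field names, zone and placeholder values are assumptions for illustration, not taken from this page — check the Instance API reference before relying on them.

```bash
# Hypothetical sketch: ask the Instance API to export a snapshot to an Object Storage bucket.
# <secret_key> and <snapshot_uuid> are the values listed above; the bucket, key and zone are placeholders.
curl -X POST \
  -H "X-Auth-Token: <secret_key>" \
  -H "Content-Type: application/json" \
  -d '{"bucket": "my-qcow2-exports", "key": "my-volume-backup.qcow2"}' \
  "https://api.scaleway.com/instance/v1/zones/fr-par-1/snapshots/<snapshot_uuid>/export"
```
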
4 changes: 2 additions & 2 deletions compute/instances/troubleshooting/bootscript-eol.mdx
@@ -90,10 +90,10 @@ If your Instance is using the bootscript option to boot in normal mode you are i
</TabsTab>
<TabsTab label="Option 2">

#### Create a snapshot of the volume(s) and export it to S3 to retrieve the data
#### Create a snapshot of the volume(s) and export it to Object Storage to retrieve the data

1. [Create a snapshot](/compute/instances/how-to/create-a-snapshot/) of the volume using the **l_ssd** type of snapshot.
2. [Export](/compute/instances/how-to/snapshot-import-export-feature/) the snapshot to an S3 bucket in the same region as the Instance.
2. [Export](/compute/instances/how-to/snapshot-import-export-feature/) the snapshot to an Object Storage bucket in the same region as the Instance.
3. Retrieve your data from the Object Storage bucket and reuse it at your convenience.
4. Delete the old Instance that was using a bootscript once you have recovered your data.

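As a sketch of step 3, the exported QCOW2 file can be downloaded with any Amazon S3-compatible client; the bucket name, object key and region below are placeholders, and the AWS CLI is assumed to be configured with your Scaleway API key:

```bash
# Hypothetical example: download the exported snapshot from the bucket,
# pointing the AWS CLI at the Object Storage endpoint of the bucket's region.
aws s3 cp s3://my-exports-bucket/my-volume.qcow2 ./my-volume.qcow2 \
  --endpoint-url https://s3.fr-par.scw.cloud
```
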
2 changes: 1 addition & 1 deletion containers/kubernetes/how-to/edit-kosmos-cluster.mdx
@@ -111,7 +111,7 @@ In order to add external nodes to your multi-cloud cluster, you must first [crea
The Kubernetes version of the existing nodes in your multi-cloud pool can be upgraded in place. Your workload will theoretically keep running during the upgrade, but it is best to drain the node before the upgrade.

1. In the Pools section of your Kosmos cluster, click **Upgrade** next to the node pool. This will not cause any of your existing nodes to upgrade, but will instead ensure that any new nodes added to the pool will start up with the newer version.
2. Run the installer program as you would do for a fresh node install, with the additional option `-self-update`. If the option is not available, redownload the program from S3 bucket.
2. Run the installer program as you would do for a fresh node install, with the additional option `-self-update`. If the option is not available, redownload the program from the Object Storage bucket.
3. The node will now register itself with the API server. Once it is ready, you will see the same node with two kubelet versions. The older node should end up `NotReady` after about 5 minutes, at which point you can safely delete it with `kubectl`.
4. Detach the older node using the Scaleway API.

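A possible sequence of `kubectl` commands for the drain, verify and delete parts of this procedure — the node names are placeholders, and draining remains optional, as noted above:

```bash
# Optional but recommended: drain the node before running the installer with -self-update.
kubectl drain my-external-node --ignore-daemonsets --delete-emptydir-data

# After the upgraded kubelet registers, the same machine shows up with two kubelet versions.
kubectl get nodes -o wide

# Once the older entry turns NotReady (after roughly 5 minutes), remove it.
kubectl delete node my-external-node-old
```
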
8 changes: 4 additions & 4 deletions faq/objectstorage.mdx
@@ -1,7 +1,7 @@
---
meta:
title: Object Storage FAQ
description: Discover S3 Object Storage.
description: Discover Scaleway Object Storage.
content:
h1: Object Storage
hero: assets/objectstorage.webp
@@ -13,14 +13,14 @@ category: storage

## What is Object Storage?

Object Storage is a service based on the S3 protocol. It allows you to store any kind of object (documents, images, videos, etc.).
Object Storage is a service based on the Amazon S3 protocol. It allows you to store any kind of object (documents, images, videos, etc.).

Scaleway provides an integrated UI in the [console](https://console.scaleway.com) for convenience. As browsing infinite storage through the web requires some technical trade-offs, some actions are limited in the console for Object Storage:

- batch deletion is limited to 1000 objects.
- empty files are not reported as empty folders.

We provide an S3-compatible API for programmatic access or usage with any compatible software. Therefore, we recommend using dedicated tools such as `s3cmd` to manage large data sets.
We provide an Amazon S3-compatible API for programmatic access or usage with any compatible software. Therefore, we recommend using dedicated tools such as `s3cmd` to manage large data sets.
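
For instance, a bulk deletion larger than the console's 1,000-object limit can be handled with `s3cmd`; the bucket name and prefix below are only examples, and `s3cmd` is assumed to be already configured for Scaleway:

```bash
# Example only: list, then recursively delete, every object under a prefix.
s3cmd ls --recursive s3://my-bucket/old-logs/
s3cmd del --recursive --force s3://my-bucket/old-logs/
```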

## How am I billed for Object Storage?

@@ -283,4 +283,4 @@ Large objects can be uploaded using [multipart uploads](/storage/object/api-cli/

Yes, a best practice is to create a [lifecycle rule](/storage/object/how-to/manage-lifecycle-rules/) targeting all objects in the bucket, or using a filter with an empty prefix.
In this case, all files contained within the selected bucket will have their storage class altered automatically on the dates stipulated by you.
However, due to S3 Protocol restrictions, a lifecycle rule cannot be created to modify the storage class from Glacier to Standard.
However, due to restrictions in the Amazon S3 protocol, a lifecycle rule cannot be created to change the storage class from Glacier to Standard.
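
As a sketch of such a rule, this is roughly what a lifecycle configuration with an empty prefix could look like when applied with the AWS CLI against the Scaleway endpoint; the bucket name, region and 30-day delay are assumptions:

```bash
# Hypothetical example: transition every object in the bucket to Glacier after 30 days.
# The empty Prefix makes the rule apply to all objects in the bucket.
aws s3api put-bucket-lifecycle-configuration \
  --bucket my-bucket \
  --endpoint-url https://s3.fr-par.scw.cloud \
  --lifecycle-configuration '{
    "Rules": [
      {
        "ID": "all-objects-to-glacier",
        "Filter": { "Prefix": "" },
        "Status": "Enabled",
        "Transitions": [ { "Days": 30, "StorageClass": "GLACIER" } ]
      }
    ]
  }'
```
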
@@ -5,7 +5,7 @@ meta:
content:
h1: Using IAM API keys with Object Storage
paragraph: This page explains how to use IAM API keys with Object Storage
tags: API key Projects IAM API-key Preferred-project Object-Storage S3
tags: API key Projects IAM API-key Preferred-project Object-Storage Amazon-S3
dates:
validation: 2024-05-27
posted: 2022-11-02
@@ -15,7 +15,7 @@ categories:

You can carry out actions on Scaleway Object Storage resources either via the [Scaleway console](https://console.scaleway.com/), or via a third-party API or CLI, such as [the AWS CLI](/storage/object/api-cli/object-storage-aws-cli/), [MinIOClient](/storage/object/api-cli/installing-minio-client/) or [Rclone](/storage/object/api-cli/installing-rclone/).

While the Scaleway console gives you the option to specify the [Scaleway Project](#what-is-a-project) to carry out your Object Storage actions in, this option is not available via third-party API/CLI tools. These tools are based on a [standard S3 programming interface](https://en.wikipedia.org/wiki/Amazon_S3#S3_API_and_competing_services), which does not accept Project ID as a parameter. Therefore, when you create a Scaleway API key with IAM, you are prompted to specify the API key's **preferred Project for Object Storage**. This API key will always use this Project when carrying out Object Storage actions via any API/CLI.
While the Scaleway console gives you the option to specify the [Scaleway Project](#what-is-a-project) to carry out your Object Storage actions in, this option is not available via third-party API/CLI tools. These tools are based on a [standard Amazon S3 programming interface](https://en.wikipedia.org/wiki/Amazon_S3#S3_API_and_competing_services), which does not accept Project ID as a parameter. Therefore, when you create a Scaleway API key with IAM, you are prompted to specify the API key's **preferred Project for Object Storage**. This API key will always use this Project when carrying out Object Storage actions via any API/CLI.
<Lightbox src="scaleway-preferred-project.webp" alt="" />

In this document, we explain the concept of preferred Projects for Object Storage, show how to configure your IAM API key accordingly, and give some code examples for overriding the preferred Project when making an API call.
@@ -35,13 +35,13 @@ When you generate an API key with IAM, the key is associated with a specific [IA

## The impact of preferred Projects

When you perform an action on Scaleway Object Storage resources using a third-party API or CLI, such as [the AWS CLI](/storage/object/api-cli/object-storage-aws-cli/), [MinIOClient](/storage/object/api-cli/installing-minio-client/) or [Rclone](/storage/object/api-cli/installing-rclone/), you are using tools based on a [standard S3 programming interface](https://en.wikipedia.org/wiki/Amazon_S3#S3_API_and_competing_services). This standard interface does not accept Project ID as a parameter. Therefore, when you create a Scaleway API key with IAM, you are prompted to specify the API key's **preferred Project for Object Storage**. This API key will always use this Project when carrying out Object Storage actions via any API/CLI. The preferred Project is specified when creating the API key (or can be edited at a later date).
When you perform an action on Scaleway Object Storage resources using a third-party API or CLI, such as [the AWS CLI](/storage/object/api-cli/object-storage-aws-cli/), [MinIOClient](/storage/object/api-cli/installing-minio-client/) or [Rclone](/storage/object/api-cli/installing-rclone/), you are using tools based on a [standard Amazon S3 programming interface](https://en.wikipedia.org/wiki/Amazon_S3#S3_API_and_competing_services). This standard interface does not accept Project ID as a parameter. Therefore, when you create a Scaleway API key with IAM, you are prompted to specify the API key's **preferred Project for Object Storage**. This API key will always use this Project when carrying out Object Storage actions via any API/CLI. The preferred Project is specified when creating the API key (or can be edited at a later date).

<Message type="important">
Setting the preferred Project does not automatically give the API key bearer permissions for Object Storage in this Project. Ensure that the user/application is either the Owner of the Organization, or has a [policy](/identity-and-access-management/iam/concepts/#policy) giving them appropriate permissions for Object Storage in this Project. Note that the application of Object Storage permissions can take up to 5 minutes after creating a new rule or policy.
</Message>

When using the S3 CLI:
When using the AWS S3 CLI:
- An action of listing the buckets (`aws s3 ls`) will list the buckets of the preferred Project
- An action of creating a bucket (`aws s3 mb`) will create a new bucket inside the preferred Project
- An action of moving an object from one bucket to another (`aws s3 mv source destination`) will only work if both the source and destination buckets are in the API key's preferred Project
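
To make this behaviour concrete, a few illustrative commands — they assume the AWS CLI is already configured with a Scaleway API key, and all bucket names and the region are placeholders:

```bash
# Lists only the buckets that live in the API key's preferred Project.
aws s3 ls --endpoint-url https://s3.fr-par.scw.cloud

# Creates the new bucket inside the preferred Project.
aws s3 mb s3://my-new-bucket --endpoint-url https://s3.fr-par.scw.cloud

# Works only if both buckets belong to the preferred Project.
aws s3 mv s3://bucket-a/report.csv s3://bucket-b/report.csv \
  --endpoint-url https://s3.fr-par.scw.cloud
```
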
2 changes: 1 addition & 1 deletion identity-and-access-management/iam/concepts.mdx
@@ -95,7 +95,7 @@ For each policy rule, you specify one or more permission sets (e.g. "list all In

## Preferred Project

You can carry out actions on Scaleway Object Storage resources either via the [Scaleway console](https://console.scaleway.com), or via a third-party API or CLI, such as [the AWS CLI](/storage/object/api-cli/object-storage-aws-cli/), [MinIOClient](/storage/object/api-cli/installing-minio-client/) or [Rclone](/storage/object/api-cli/installing-rclone/). While the Scaleway console gives you the option to specify the [Scaleway Project](#what-is-a-project) to carry out your Object Storage actions in, this option is not available via third-party API/CLI tools. These tools are based on a [standard S3 programming interface](https://en.wikipedia.org/wiki/Amazon_S3#S3_API_and_competing_services), which does not accept Project ID as a parameter. Therefore, when you create a Scaleway API key with IAM, you are prompted to specify the API key's **preferred Project for Object Storage**. This API key will always use this Project when carrying out Object Storage actions via any API/CLI. See our page on [using API keys with Object Storage](/identity-and-access-management/iam/api-cli/using-api-key-object-storage/) for more information.
You can carry out actions on Scaleway Object Storage resources either via the [Scaleway console](https://console.scaleway.com), or via a third-party API or CLI, such as [the AWS CLI](/storage/object/api-cli/object-storage-aws-cli/), [MinIOClient](/storage/object/api-cli/installing-minio-client/) or [Rclone](/storage/object/api-cli/installing-rclone/). While the Scaleway console gives you the option to specify the [Scaleway Project](#what-is-a-project) to carry out your Object Storage actions in, this option is not available via third-party API/CLI tools. These tools are based on a [standard Amazon S3 programming interface](https://en.wikipedia.org/wiki/Amazon_S3#S3_API_and_competing_services), which does not accept Project ID as a parameter. Therefore, when you create a Scaleway API key with IAM, you are prompted to specify the API key's **preferred Project for Object Storage**. This API key will always use this Project when carrying out Object Storage actions via any API/CLI. See our page on [using API keys with Object Storage](/identity-and-access-management/iam/api-cli/using-api-key-object-storage/) for more information.

## Principal

10 changes: 5 additions & 5 deletions managed-services/iot-hub/api-cli/iot-hub-routes.mdx
@@ -9,7 +9,7 @@ categories:
- managed-services
dates:
validation: 2024-04-22
tags: iot iot-hub mqtt cli s3cmd s3
tags: iot iot-hub mqtt cli s3cmd amazon-s3
---

Routes are integrations with the Scaleway ecosystem: they can forward MQTT messages to Scaleway services.
@@ -26,9 +26,9 @@ Routes are integrations with the Scaleway ecosystem: they can forward MQTT messa
- Installed the [Scaleway CLI](https://github.com/scaleway/scaleway-cli#scaleway-cli-v2) and [read the accompanying IoT document](/managed-services/iot-hub/api-cli/getting-started-with-iot-hub-cli/)
- Installed and configured [`s3cmd`](/tutorials/s3cmd/) for Scaleway

## S3 Routes
## Amazon S3 Routes

The S3 route allows you to put the payload of MQTT messages directly into Scaleway's Object Storage.
The Amazon S3 route allows you to put the payload of MQTT messages directly into Scaleway's Object Storage.

<Message type="important">
This section is a continuation of the [IoT Hub CLI quickstart](/managed-services/iot-hub/api-cli/getting-started-with-iot-hub-cli/). Make sure to follow the quickstart before beginning.
@@ -41,9 +41,9 @@ The S3 route allows you to put the payload of MQTT messages directly into Scalew
PREFIX="iot/messages"
# Create the bucket
s3cmd mb --region "$REGION" "s3://$BUCKET"
# Grant write access to IoT Hub S3 Route Service to your bucket
# Grant write access to IoT Hub Amazon S3 Route Service to your bucket
s3cmd setacl --region "$REGION" "s3://$BUCKET" --acl-grant=write:555c69c3-87d0-4bf8-80f1-99a2f757d031:555c69c3-87d0-4bf8-80f1-99a2f757d031
# Create the IoT Hub S3 Route
# Create the IoT Hub Amazon S3 Route
scw iot route create \
hub-id=$(jq -r '.id' hub.json) \
name=route-s3-cli topic="hello/world" \
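
# Suggested check (an assumption, not part of the tutorial above): once the route
# exists, publish a message on the "hello/world" topic with any MQTT client, then
# list the target prefix to confirm the payload was written to the bucket.
# This reuses the $REGION, $BUCKET and $PREFIX variables defined earlier.
s3cmd ls --region "$REGION" "s3://$BUCKET/$PREFIX/"
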
2 changes: 1 addition & 1 deletion managed-services/iot-hub/concepts.mdx
@@ -96,7 +96,7 @@ Increasing the QoS level decreases message throughput because of the additional

## Routes

IoT Routes forward messages to non publish/subscribe destinations such as databases, REST APIs, Serverless functions and S3 buckets. See [Understanding Routes](/managed-services/iot-hub/reference-content/routes/) for further information.
IoT Routes forward messages to non publish/subscribe destinations such as databases, REST APIs, Serverless functions and Object Storage buckets. See [Understanding Routes](/managed-services/iot-hub/reference-content/routes/) for further information.

## TLS

4 changes: 2 additions & 2 deletions managed-services/iot-hub/how-to/understand-event-messages.mdx
@@ -60,9 +60,9 @@ This section shows you the types of message that can be received in IoT Hub Even

## Route messages

### S3 route errors
### Amazon S3 route errors
- `"'BUCKET_NAME' s3 bucket write failed. Error HTTP_STATUS_CODE: ERROR_CODE (request-id: REQUEST_ID)"`:
The route failed to write to the specified s3 bucket.
The route failed to write to the specified Object Storage bucket.
`BUCKET_NAME` is the name of the bucket the route attempted to write to; `HTTP_STATUS_CODE` and `ERROR_CODE` are standard [S3 error codes](https://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html#ErrorCodeList).

## Database errors
2 changes: 1 addition & 1 deletion managed-services/iot-hub/reference-content/routes.mdx
@@ -8,7 +8,7 @@ content:
excerpt: |
This page provides detailed information about Scaleway IoT Hub Routes.
totalTime: PT5M
tags: iot iot-hub route s3 database postgres postgresql mysql rest api inference
tags: iot iot-hub route amazon-s3 database postgres postgresql mysql rest api inference
dates:
validation: 2024-05-06
posted: 2021-08-31
2 changes: 1 addition & 1 deletion menu/navigation.json
@@ -4397,7 +4397,7 @@
"slug": "optimize-object-storage-performance"
},
{
"label": "Equivalence between S3 actions and IAM permissions",
"label": "Equivalence between Object Storage actions and IAM permissions",
"slug": "s3-iam-permissions-equivalence"
}
],
2 changes: 1 addition & 1 deletion network/load-balancer/concepts.mdx
@@ -159,7 +159,7 @@ See [balancing-methods](#balancing-methods).

Routes allow you to specify, for a given frontend, which of its backends it should direct traffic to. For [HTTP](#protocol) frontends/backends, routes are based on HTTP Host headers. For [TCP](#protocol) frontends/backends, they are based on **S**erver **N**ame **I**dentification (SNI). You can configure multiple routes on a single Load Balancer.

## S3 failover
## Object Storage failover

See [customized error page](#customized-error-page)

2 changes: 1 addition & 1 deletion network/load-balancer/how-to/set-up-s3-failover.mdx
@@ -5,7 +5,7 @@ meta:
content:
h1: How to configure a customized error page
paragraph: This page explains how to configure a customized error page for your Load Balancer, using the Scaleway Object Storage Bucket Website feature
tags: s3-failover s3 failover load-balancer object-storage bucket
tags: s3-failover amazon-s3 failover load-balancer object-storage bucket
dates:
validation: 2024-05-26
posted: 2022-02-21
@@ -159,7 +159,7 @@ Benefits of this feature include:
- Providing information on service status or maintenance
- Redirecting to a mirrored site or skeleton site

Note that when entering the S3 link to redirect to, the URL of the bucket endpoint is not sufficient. The bucket website URL is specifically required (e.g.`https://my-bucket.s3-website.nl-ams.scw.cloud`). See our [dedicated documentation](/network/load-balancer/how-to/set-up-s3-failover/) for further help.
Note that when entering the Object Storage link to redirect to, the URL of the bucket endpoint is not sufficient. The bucket website URL is specifically required (e.g. `https://my-bucket.s3-website.nl-ams.scw.cloud`). See our [dedicated documentation](/network/load-balancer/how-to/set-up-s3-failover/) for further help.
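
As a reminder of where that URL comes from, the bucket website feature has to be enabled on the bucket first, which is what produces the `s3-website` address. The command below is an illustrative sketch with placeholder names, assuming the AWS CLI is configured for Scaleway:

```bash
# Illustrative only: enable the bucket website feature so the bucket is reachable at
# https://my-bucket.s3-website.nl-ams.scw.cloud rather than its plain endpoint URL.
aws s3 website s3://my-bucket \
  --index-document index.html \
  --error-document error.html \
  --endpoint-url https://s3.nl-ams.scw.cloud
```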

## Health checks

2 changes: 1 addition & 1 deletion serverless/functions/index.mdx
@@ -64,7 +64,7 @@ meta:
label="Read more"
/>
<DefaultCard
title="Transforming images in an S3 bucket using Serverless Functions and Triggers"
title="Transforming images in an Object Storage bucket using Serverless Functions and Triggers"
url="/tutorials/transform-bucket-images-triggers-functions-set-up//"
label="Read more"
/>