This repository was archived by the owner on Jun 28, 2023. It is now read-only.

Commit 1f7788f

Periodic update - 2023-01-27
1 parent c2815d4 commit 1f7788f


43 files changed, +691 -121 lines

doc_source/DeleteMarker.md

Lines changed: 1 addition & 1 deletion
@@ -20,7 +20,7 @@ If you make an API call on an object whose current version is a delete marker an
 + A 405 \(Method Not Allowed\) error
 + A response header, `x-amz-delete-marker: true`
 
-The response header tells you that the object accessed was a delete marker\. This response header never returns `false`\. If the value is `false`, Amazon S3 does not include this response header in the response\.
+The response header tells you that the object accessed was a delete marker\. This response header never returns `false`, because when the value is `false`, Amazon S3 does not include this response header in the response\.
 
 The following figure shows how a simple `GET` on an object whose current version is a delete marker, returns a 404 No Object Found error\.
 

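The header behavior corrected in this hunk can be sketched in a few lines. The following is a minimal Node.js simulation of how a client might interpret the `x-amz-delete-marker` response header; the response objects and helper name are hypothetical, not part of the AWS SDK.

```javascript
// S3 either sends `x-amz-delete-marker: true` or omits the header entirely --
// it never sends a value of `false`. So "header absent" means "not a delete marker".
function isDeleteMarker(responseHeaders) {
  return responseHeaders["x-amz-delete-marker"] === "true";
}

// Hypothetical responses; real ones would come from S3 HEAD/GET calls.
// A 405 (Method Not Allowed) on a delete marker still carries the header.
const markerResponse = {
  statusCode: 405,
  headers: { "x-amz-delete-marker": "true" },
};
const normalResponse = {
  statusCode: 200,
  headers: {}, // header omitted, which implies "false"
};

console.log(isDeleteMarker(markerResponse.headers)); // true
console.log(isDeleteMarker(normalResponse.headers)); // false
```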
doc_source/ListingKeysUsingAPIs.md

Lines changed: 21 additions & 0 deletions
@@ -237,6 +237,27 @@ func (basics BucketBasics) ListObjects(bucketName string) ([]types.Object, error
     return val/1024;
 }
 ```
+List objects using pagination\.
+
+```
+public static void listBucketObjects(S3Client s3, String bucketName) {
+    try {
+        ListObjectsV2Request listReq = ListObjectsV2Request.builder()
+            .bucket(bucketName)
+            .maxKeys(1)
+            .build();
+
+        ListObjectsV2Iterable listRes = s3.listObjectsV2Paginator(listReq);
+        listRes.stream()
+            .flatMap(r -> r.contents().stream())
+            .forEach(content -> System.out.println(" Key: " + content.key() + " size = " + content.size()));
+
+    } catch (S3Exception e) {
+        System.err.println(e.awsErrorDetails().errorMessage());
+        System.exit(1);
+    }
+}
+```
 + For API details, see [ListObjects](https://docs.aws.amazon.com/goto/SdkForJavaV2/s3-2006-03-01/ListObjects) in *AWS SDK for Java 2\.x API Reference*\.
 
 ------

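The Java paginator added in this hunk hides a loop: call `ListObjectsV2`, collect the page, and repeat with the returned continuation token until the response is no longer truncated. A sketch of that contract against a mock client (the mock function and sample data are invented for illustration; they are not the real S3 API):

```javascript
// Mock of the ListObjectsV2 pagination contract: each page returns up to
// `maxKeys` objects plus a continuation token while more objects remain.
const allObjects = [
  { key: "a.txt", size: 10 },
  { key: "b.txt", size: 20 },
  { key: "c.txt", size: 30 },
];

function mockListObjectsV2({ maxKeys, continuationToken }) {
  const start = continuationToken ? Number(continuationToken) : 0;
  const next = start + maxKeys;
  return {
    contents: allObjects.slice(start, next),
    isTruncated: next < allObjects.length,
    nextContinuationToken: next < allObjects.length ? String(next) : undefined,
  };
}

// This loop is what ListObjectsV2Iterable performs internally in the Java example.
function listAllObjects(maxKeys) {
  const keys = [];
  let token;
  do {
    const page = mockListObjectsV2({ maxKeys, continuationToken: token });
    for (const obj of page.contents) keys.push(obj.key);
    token = page.nextContinuationToken;
  } while (token);
  return keys;
}

console.log(listAllObjects(1)); // pages of one object, like maxKeys(1) above
```

With `maxKeys(1)` the mock makes three calls, one per object, yet the caller still sees the full listing, which is the point of the paginator.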
doc_source/MrapFailover.md

Lines changed: 1 addition & 1 deletion
@@ -11,7 +11,7 @@ If you have S3 Cross\-Region Replication \(CRR\) enabled with two\-way replicati
 ## Amazon S3 Multi\-Region Access Points routing states<a name="FailoverConfiguration"></a>
 
 Your Amazon S3 Multi\-Region Access Points failover configuration determines the routing status of the AWS Regions that are used with the Multi\-Region Access Point\. You can configure your Amazon S3 Multi\-Region Access Point to be in an active\-active state or active\-passive state\.
-+ **Active\-active** – In an active\-active configuration, all requests are automatically sent to the closest proximity AWS Region in your Multi\-Region Access Point\. After the Multi\-Region Access Point has been configured to be in an active\-active state, all Regions can receive traffic\. If a Region goes down in an active\-active configuration, traffic will be automatically redirected to one of the active Regions\.
++ **Active\-active** – In an active\-active configuration, all requests are automatically sent to the closest proximity AWS Region in your Multi\-Region Access Point\. After the Multi\-Region Access Point has been configured to be in an active\-active state, all Regions can receive traffic\. If traffic disruption occurs in an active\-active configuration, network traffic will automatically be redirected to one of the active Regions\.
 + **Active\-passive** – In an active\-passive configuration, the active Regions in your Multi\-Region Access Point receive traffic and the passive ones do not\. If you intend to use S3 failover controls to initiate failover in a disaster situation, set up your Multi\-Region Access Points in an active\-passive configuration while you're testing and performing disaster\-recovery planning\.
 
 ## AWS Region support<a name="RegionSupport"></a>

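The two routing states described in this hunk can be sketched as a small routing function. Everything here is invented for illustration (Region names, the health flags, and the assumption that Regions arrive pre-sorted by proximity); the real routing decision is made inside AWS.

```javascript
// Pick the Region a request is routed to under the two Multi-Region Access
// Point states described above. Regions are assumed pre-sorted by proximity
// to the client; this is a toy model, not S3's actual routing logic.
function routeRequest(regions, state) {
  const eligible =
    state === "active-active"
      ? regions.filter((r) => r.healthy)               // any healthy Region can serve
      : regions.filter((r) => r.active && r.healthy);  // only active Regions serve
  return eligible.length > 0 ? eligible[0].name : null;
}

const regions = [
  { name: "us-east-1", active: true, healthy: false }, // closest, but disrupted
  { name: "us-west-2", active: false, healthy: true },
  { name: "eu-west-1", active: true, healthy: true },
];

// Active-active: traffic is redirected to the closest healthy Region.
console.log(routeRequest(regions, "active-active")); // "us-west-2"
// Active-passive: passive Regions are skipped even when healthy.
console.log(routeRequest(regions, "active-passive")); // "eu-west-1"
```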
doc_source/MrapOperations.md

Lines changed: 1 addition & 1 deletion
@@ -12,7 +12,7 @@ The following examples demonstrate how to use Multi\-Region Access Points with c
 
 ## Multi\-Region Access Point compatibility with AWS services<a name="mrap-api-support"></a>
 
-Amazon S3 Multi\-Region Access Point Amazon Resource Names \(ARNs\) allow any application that requires an Amazon S3 bucket name to use a Multi\-Region Access Point\. You can use Amazon S3 Multi\-Region Access Point aliases anywhere that you use S3 bucket names to access data\.
+Amazon S3 Multi\-Region Access Point Amazon Resource Names \(ARNs\) allow applications \(using an AWS SDK\) that require an Amazon S3 bucket name to use a Multi\-Region Access Point\.
 
 ## Multi\-Region Access Point compatibility with S3 operations<a name="mrap-operations-support"></a>
 
doc_source/PresignedUrlUploadObject.md

Lines changed: 12 additions & 10 deletions
@@ -580,18 +580,18 @@ import {
   PutObjectCommand,
   GetObjectCommand,
   DeleteObjectCommand,
-  DeleteBucketCommand }
-  from "@aws-sdk/client-s3";
+  DeleteBucketCommand,
+} from "@aws-sdk/client-s3";
 import { s3Client } from "./libs/s3Client.js"; // Helper function that creates an Amazon S3 service client module.
 import { getSignedUrl } from "@aws-sdk/s3-request-presigner";
-const fetch = require("node-fetch");
+import fetch from "node-fetch";
 
 // Set parameters
 // Create a random names for the S3 bucket and key.
 export const bucketParams = {
   Bucket: `test-bucket-${Math.ceil(Math.random() * 10 ** 10)}`,
   Key: `test-object-${Math.ceil(Math.random() * 10 ** 10)}`,
-  Body: "BODY"
+  Body: "BODY",
 };
 
 export const run = async () => {
@@ -601,7 +601,6 @@ export const run = async () => {
     const data = await s3Client.send(
       new CreateBucketCommand({ Bucket: bucketParams.Bucket })
     );
-    return data; // For unit tests.
     console.log(`Waiting for "${bucketParams.Bucket}" bucket creation...\n`);
   } catch (err) {
     console.log("Error creating bucket", err);
@@ -616,7 +615,6 @@ export const run = async () => {
         Body: bucketParams.Body,
       })
     );
-    return data; // For unit tests.
   } catch (err) {
     console.log("Error putting object", err);
   }
@@ -644,19 +642,23 @@ export const run = async () => {
   try {
     console.log(`\nDeleting object "${bucketParams.Key}"} from bucket`);
     const data = await s3Client.send(
-      new DeleteObjectCommand({ Bucket: bucketParams.Bucket, Key: bucketParams.Key })
+      new DeleteObjectCommand({
+        Bucket: bucketParams.Bucket,
+        Key: bucketParams.Key,
+      })
     );
-    return data; // For unit tests.
   } catch (err) {
     console.log("Error deleting object", err);
   }
   // Delete the S3 bucket.
   try {
     console.log(`\nDeleting bucket ${bucketParams.Bucket}`);
     const data = await s3Client.send(
-      new DeleteBucketCommand({ Bucket: bucketParams.Bucket, Key: bucketParams.Key })
+      new DeleteBucketCommand({
+        Bucket: bucketParams.Bucket,
+        Key: bucketParams.Key,
+      })
     );
-    return data; // For unit tests.
   } catch (err) {
     console.log("Error deleting object", err);
   }

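The presigned-URL examples in these hunks sign with `expiresIn: 3600`. A SigV4 presigned URL carries its signing time and lifetime in the `X-Amz-Date` and `X-Amz-Expires` query parameters, so expiry can be checked without any AWS call. The sketch below fabricates a sample URL for illustration; a real one would come from `getSignedUrl()`.

```javascript
// Parse the SigV4 query parameters a presigned URL carries and decide
// whether the URL is still within its validity window.
function isPresignedUrlExpired(url, now = new Date()) {
  const params = new URL(url).searchParams;
  const amzDate = params.get("X-Amz-Date");            // e.g. 20230127T120000Z
  const expires = Number(params.get("X-Amz-Expires")); // lifetime in seconds
  // Convert the compact SigV4 timestamp to ISO 8601 so Date.parse accepts it.
  const signedAt = Date.parse(
    amzDate.replace(
      /^(\d{4})(\d{2})(\d{2})T(\d{2})(\d{2})(\d{2})Z$/,
      "$1-$2-$3T$4:$5:$6Z"
    )
  );
  return now.getTime() > signedAt + expires * 1000;
}

// Fabricated sample URL (bucket, key, and signature are placeholders).
const sampleUrl =
  "https://test-bucket.s3.amazonaws.com/test-object" +
  "?X-Amz-Date=20230127T120000Z&X-Amz-Expires=3600&X-Amz-Signature=abc";

console.log(isPresignedUrlExpired(sampleUrl, new Date("2023-01-27T13:00:01Z"))); // true
console.log(isPresignedUrlExpired(sampleUrl, new Date("2023-01-27T12:01:00Z"))); // false
```

Note that S3 also enforces expiry server-side; this check only saves a doomed request.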
doc_source/ShareObjectPreSignedURL.md

Lines changed: 99 additions & 0 deletions
@@ -172,6 +172,105 @@ namespace Amazon.DocSamples.S3
 }
 ```
 
+------
+#### [ JavaScript ]
+
+**Example**
+The following example creates a bucket, puts an object, creates a signed url for that object, fetches the url, then deletes the object and the bucket\.
+
+```
+// Import the required AWS SDK clients and commands for Node.js
+import {
+  CreateBucketCommand,
+  PutObjectCommand,
+  GetObjectCommand,
+  DeleteObjectCommand,
+  DeleteBucketCommand,
+} from "@aws-sdk/client-s3";
+import { s3Client } from "./libs/s3Client.js"; // Helper function that creates an Amazon S3 service client module.
+import { getSignedUrl } from "@aws-sdk/s3-request-presigner";
+import fetch from "node-fetch";
+
+// Set parameters
+// Create a random names for the S3 bucket and key.
+export const bucketParams = {
+  Bucket: `test-bucket-${Math.ceil(Math.random() * 10 ** 10)}`,
+  Key: `test-object-${Math.ceil(Math.random() * 10 ** 10)}`,
+  Body: "BODY",
+};
+
+export const run = async () => {
+  // Create an S3 bucket.
+  try {
+    console.log(`Creating bucket ${bucketParams.Bucket}`);
+    const data = await s3Client.send(
+      new CreateBucketCommand({ Bucket: bucketParams.Bucket })
+    );
+    console.log(`Waiting for "${bucketParams.Bucket}" bucket creation...\n`);
+  } catch (err) {
+    console.log("Error creating bucket", err);
+  }
+  // Put the object in the S3 bucket.
+  try {
+    console.log(`Putting object "${bucketParams.Key}" in bucket`);
+    const data = await s3Client.send(
+      new PutObjectCommand({
+        Bucket: bucketParams.Bucket,
+        Key: bucketParams.Key,
+        Body: bucketParams.Body,
+      })
+    );
+  } catch (err) {
+    console.log("Error putting object", err);
+  }
+  // Create a presigned URL.
+  try {
+    // Create the command.
+    const command = new GetObjectCommand(bucketParams);
+
+    // Create the presigned URL.
+    const signedUrl = await getSignedUrl(s3Client, command, {
+      expiresIn: 3600,
+    });
+    console.log(
+      `\nGetting "${bucketParams.Key}" using signedUrl with body "${bucketParams.Body}" in v3`
+    );
+    console.log(signedUrl);
+    const response = await fetch(signedUrl);
+    console.log(
+      `\nResponse returned by signed URL: ${await response.text()}\n`
+    );
+  } catch (err) {
+    console.log("Error creating presigned URL", err);
+  }
+  // Delete the object.
+  try {
+    console.log(`\nDeleting object "${bucketParams.Key}"} from bucket`);
+    const data = await s3Client.send(
+      new DeleteObjectCommand({
+        Bucket: bucketParams.Bucket,
+        Key: bucketParams.Key,
+      })
+    );
+  } catch (err) {
+    console.log("Error deleting object", err);
+  }
+  // Delete the S3 bucket.
+  try {
+    console.log(`\nDeleting bucket ${bucketParams.Bucket}`);
+    const data = await s3Client.send(
+      new DeleteBucketCommand({
+        Bucket: bucketParams.Bucket,
+        Key: bucketParams.Key,
+      })
+    );
+  } catch (err) {
+    console.log("Error deleting object", err);
+  }
+};
+run();
+```
+
 ------
 #### [ PHP ]
 
doc_source/UsingEncryption.md

Lines changed: 1 addition & 1 deletion
@@ -1,7 +1,7 @@
 # Protecting data using encryption<a name="UsingEncryption"></a>
 
 **Important**
-Amazon S3 now applies server\-side encryption with Amazon S3 managed keys \(SSE\-S3\) as the base level of encryption for every bucket in Amazon S3\. Starting January 5, 2023, all new object uploads to Amazon S3 will be automatically encrypted at no additional cost and with no impact on performance\. Currently, the automatic encryption status for S3 bucket default encryption configuration and for new object uploads is available in AWS CloudTrail logs\. During the next few weeks, the automatic encryption status will also be rolled out to the Amazon S3 console, S3 Inventory, S3 Storage Lens, and as an additional Amazon S3 API response header in the AWS Command Line Interface and AWS SDKs\. When this update is complete in all AWS Regions, we will update the documentation\. For more information, see [Default encryption FAQ](https://docs.aws.amazon.com/AmazonS3/latest/userguide/default-encryption-faq.html)\.
+Amazon S3 now applies server\-side encryption with Amazon S3 managed keys \(SSE\-S3\) as the base level of encryption for every bucket in Amazon S3\. Starting January 5, 2023, all new object uploads to Amazon S3 will be automatically encrypted at no additional cost and with no impact on performance\. Currently, the automatic encryption status for S3 bucket default encryption configuration and for new object uploads is available in AWS CloudTrail logs, S3 Inventory, and S3 Storage Lens\. During the next few weeks, the automatic encryption status will also be rolled out to the Amazon S3 console and as an additional Amazon S3 API response header in the AWS Command Line Interface and AWS SDKs\. When this update is complete in all AWS Regions, we will update the documentation\. For more information, see [Default encryption FAQ](https://docs.aws.amazon.com/AmazonS3/latest/userguide/default-encryption-faq.html)\.
 
 Data protection refers to protecting data while in\-transit \(as it travels to and from Amazon S3\) and at rest \(while it is stored on disks in Amazon S3 data centers\)\. You can protect data in transit using Secure Socket Layer/Transport Layer Security \(SSL/TLS\) or client\-side encryption\. You have the following options for protecting data at rest in Amazon S3:
 + **Server\-Side Encryption** – Request Amazon S3 to encrypt your object before saving it on disks in its data centers and then decrypt it when you download the objects\.

doc_source/UsingKMSEncryption.md

Lines changed: 3 additions & 3 deletions
@@ -1,7 +1,7 @@
 # Using server\-side encryption with AWS Key Management Service \(SSE\-KMS\)<a name="UsingKMSEncryption"></a>
 
 **Important**
-Amazon S3 now applies server\-side encryption with Amazon S3 managed keys \(SSE\-S3\) as the base level of encryption for every bucket in Amazon S3\. Starting January 5, 2023, all new object uploads to Amazon S3 will be automatically encrypted at no additional cost and with no impact on performance\. Currently, the automatic encryption status for S3 bucket default encryption configuration and for new object uploads is available in AWS CloudTrail logs\. During the next few weeks, the automatic encryption status will also be rolled out to the Amazon S3 console, S3 Inventory, S3 Storage Lens, and as an additional Amazon S3 API response header in the AWS Command Line Interface and AWS SDKs\. When this update is complete in all AWS Regions, we will update the documentation\. For more information, see [Default encryption FAQ](https://docs.aws.amazon.com/AmazonS3/latest/userguide/default-encryption-faq.html)\.
+Amazon S3 now applies server\-side encryption with Amazon S3 managed keys \(SSE\-S3\) as the base level of encryption for every bucket in Amazon S3\. Starting January 5, 2023, all new object uploads to Amazon S3 will be automatically encrypted at no additional cost and with no impact on performance\. Currently, the automatic encryption status for S3 bucket default encryption configuration and for new object uploads is available in AWS CloudTrail logs, S3 Inventory, and S3 Storage Lens\. During the next few weeks, the automatic encryption status will also be rolled out to the Amazon S3 console and as an additional Amazon S3 API response header in the AWS Command Line Interface and AWS SDKs\. When this update is complete in all AWS Regions, we will update the documentation\. For more information, see [Default encryption FAQ](https://docs.aws.amazon.com/AmazonS3/latest/userguide/default-encryption-faq.html)\.
 
 Server\-side encryption is the encryption of data at its destination by the application or service that receives it\. AWS Key Management Service \(AWS KMS\) is a service that combines secure, highly available hardware and software to provide a key management system scaled for the cloud\. Amazon S3 uses server\-side encryption with AWS KMS \(SSE\-KMS\) to encrypt your S3 object data\. Also, when SSE\-KMS is requested for the object, the S3 checksum as part of the object's metadata, is stored in encrypted form\. For more information about checksum, see [Checking object integrity](checking-object-integrity.md)\.
 
@@ -55,7 +55,7 @@ When you request that your data be decrypted, Amazon S3 and AWS KMS perform the
 
 1. Amazon S3 sends the encrypted data key to AWS KMS in a `Decrypt` request\.
 
-1. AWS KMS decrypts the data key by using the same KMS key and returns the plaintext data key to Amazon S3\.
+1. AWS KMS decrypts the encrypted data key by using the same KMS key and returns the plaintext data key to Amazon S3\.
 
 1. Amazon S3 decrypts the encrypted data, using the plaintext data key, and removes the plaintext data key from memory as soon as possible\.
 
@@ -154,4 +154,4 @@ If your object uses SSE\-KMS, don't send encryption request headers for `GET` re
 + [Encryption context](#encryption-context)
 + [Sending requests for AWS KMS encrypted objects](#aws-signature-version-4-sse-kms)
 + [Specifying server\-side encryption with AWS KMS \(SSE\-KMS\)](specifying-kms-encryption.md)
-+ [Reducing the cost of SSE\-KMS with Amazon S3 Bucket Keys](bucket-key.md)
++ [Reducing the cost of SSE\-KMS with Amazon S3 Bucket Keys](bucket-key.md)

doc_source/UsingServerSideEncryption.md

Lines changed: 1 addition & 1 deletion
@@ -1,7 +1,7 @@
 # Using server\-side encryption with Amazon S3\-managed encryption keys \(SSE\-S3\)<a name="UsingServerSideEncryption"></a>
 
 **Important**
-Amazon S3 now applies server\-side encryption with Amazon S3 managed keys \(SSE\-S3\) as the base level of encryption for every bucket in Amazon S3\. Starting January 5, 2023, all new object uploads to Amazon S3 will be automatically encrypted at no additional cost and with no impact on performance\. Currently, the automatic encryption status for S3 bucket default encryption configuration and for new object uploads is available in AWS CloudTrail logs\. During the next few weeks, the automatic encryption status will also be rolled out to the Amazon S3 console, S3 Inventory, S3 Storage Lens, and as an additional Amazon S3 API response header in the AWS Command Line Interface and AWS SDKs\. When this update is complete in all AWS Regions, we will update the documentation\. For more information, see [Default encryption FAQ](https://docs.aws.amazon.com/AmazonS3/latest/userguide/default-encryption-faq.html)\.
+Amazon S3 now applies server\-side encryption with Amazon S3 managed keys \(SSE\-S3\) as the base level of encryption for every bucket in Amazon S3\. Starting January 5, 2023, all new object uploads to Amazon S3 will be automatically encrypted at no additional cost and with no impact on performance\. Currently, the automatic encryption status for S3 bucket default encryption configuration and for new object uploads is available in AWS CloudTrail logs, S3 Inventory, and S3 Storage Lens\. During the next few weeks, the automatic encryption status will also be rolled out to the Amazon S3 console and as an additional Amazon S3 API response header in the AWS Command Line Interface and AWS SDKs\. When this update is complete in all AWS Regions, we will update the documentation\. For more information, see [Default encryption FAQ](https://docs.aws.amazon.com/AmazonS3/latest/userguide/default-encryption-faq.html)\.
 
 Server\-side encryption protects data at rest\. Amazon S3 encrypts each object with a unique key\. As an additional safeguard, it encrypts the key itself with a key that it rotates regularly\. Amazon S3 server\-side encryption uses one of the strongest block ciphers available to encrypt your data, 256\-bit Advanced Encryption Standard \(AES\-256\)\.
 
doc_source/access-control-block-public-access.md

Lines changed: 3 additions & 5 deletions
@@ -60,13 +60,11 @@ Note that it isn't currently possible to change an access point's block public a
 
 ## The meaning of "public"<a name="access-control-block-public-access-policy-status"></a>
 
-### Buckets<a name="access-control-block-public-access-policy-status-buckets"></a>
-
-#### ACLs<a name="public-acls"></a>
+### ACLs<a name="public-acls"></a>
 
 Amazon S3 considers a bucket or object ACL public if it grants any permissions to members of the predefined `AllUsers` or `AuthenticatedUsers` groups\. For more information about predefined groups, see [Amazon S3 predefined groups](acl-overview.md#specifying-grantee-predefined-groups)\.
 
-#### Bucket policies<a name="public-bucket-policies"></a>
+### Bucket policies<a name="public-bucket-policies"></a>
 
 When evaluating a bucket policy, Amazon S3 begins by assuming that the policy is public\. It then evaluates the policy to determine whether it qualifies as non\-public\. To be considered non\-public, a bucket policy must grant access only to fixed values \(values that don't contain a wildcard or [an AWS Identity and Access Management Policy Variable](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_variables.html)\) for one or more of the following:
 + An AWS principal, user, role, or service principal \(e\.g\. `aws:PrincipalOrgID`\)
@@ -129,7 +127,7 @@ You can make these policies non\-public by including any of the condition keys l
 }
 ```
 
-#### Example<a name="access-control-block-public-access-policy-example"></a>
+### How Amazon S3 evaluates a bucket policy that contains both public and non\-public access grants<a name="access-control-block-public-access-policy-example"></a>
 
 This example shows how Amazon S3 evaluates a bucket policy that contains both public and non\-public access grants\.
 

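The bucket-policy section in the hunk above says a policy counts as non-public only when it grants access against fixed values, that is, values containing no wildcard and no IAM policy variable. The sketch below implements that single rule for condition values; it is a deliberately simplified model (the sample statements and the helper names are invented), and Amazon S3's real evaluation covers principals, many more condition keys, and numerous edge cases.

```javascript
// A value is "fixed" only if it contains no wildcard characters and no
// IAM policy variable of the form ${...}.
function isFixedValue(value) {
  return !/[*?]/.test(value) && !/\$\{.+\}/.test(value);
}

// Collect every condition value in a statement and apply the rule above:
// no restricting condition at all, or a condition on a non-fixed value,
// leaves the statement public under this simplified model.
function statementLooksPublic(statement) {
  const conditionValues = statement.Condition
    ? Object.values(statement.Condition).flatMap((c) => Object.values(c).flat())
    : [];
  return conditionValues.length === 0 || !conditionValues.every(isFixedValue);
}

const unconditional = { Effect: "Allow", Principal: "*", Action: "s3:GetObject" };
const vpcRestricted = {
  Effect: "Allow",
  Principal: "*",
  Action: "s3:GetObject",
  Condition: { StringEquals: { "aws:SourceVpc": "vpc-91237329" } }, // fixed value
};
const wildcarded = {
  Effect: "Allow",
  Principal: "*",
  Action: "s3:GetObject",
  Condition: { StringLike: { "aws:Referer": "https://example.com/*" } }, // wildcard
};

console.log(statementLooksPublic(unconditional)); // true
console.log(statementLooksPublic(vpcRestricted)); // false
console.log(statementLooksPublic(wildcarded));    // true
```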