gh-136565: Improve and amend hashlib.__doc__ #136566


Merged 1 commit on Jul 12, 2025
20 changes: 10 additions & 10 deletions Lib/hashlib.py
@@ -2,7 +2,7 @@
# Licensed to PSF under a Contributor Agreement.
#

-__doc__ = """hashlib module - A common interface to many hash functions.
+__doc__ = r"""hashlib module - A common interface to many hash functions.

new(name, data=b'', **kwargs) - returns a new hash object implementing the
given hash function; initializing the hash
@@ -12,7 +12,7 @@
than using new(name):

md5(), sha1(), sha224(), sha256(), sha384(), sha512(), blake2b(), blake2s(),
-sha3_224, sha3_256, sha3_384, sha3_512, shake_128, and shake_256.
+sha3_224(), sha3_256(), sha3_384(), sha3_512(), shake_128(), and shake_256().

More algorithms may be available on your platform but the above are guaranteed
to exist. See the algorithms_guaranteed and algorithms_available attributes
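As context for the constructors and attributes mentioned above, a minimal
sketch using only documented hashlib APIs (the input bytes are arbitrary
examples):

    import hashlib

    # Algorithms guaranteed on every platform vs. those this build provides.
    print(sorted(hashlib.algorithms_guaranteed))
    print(sorted(hashlib.algorithms_available))

    # new(name) and the named constructors yield equivalent hash objects.
    h1 = hashlib.new('sha256', b'example')
    h2 = hashlib.sha256(b'example')
    assert h1.hexdigest() == h2.hexdigest()

    # SHAKE digests are variable length: pass the desired size in bytes.
    print(hashlib.shake_128(b'example').hexdigest(16))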
@@ -21,8 +21,8 @@
NOTE: If you want the adler32 or crc32 hash functions they are available in
the zlib module.
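A brief sketch of the zlib checksums the NOTE points to (both are documented
zlib functions; the input bytes are arbitrary):

    import zlib

    # adler32 and crc32 live in zlib, not hashlib, and return ints, not bytes.
    print(zlib.crc32(b'example'))
    print(zlib.adler32(b'example'))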

-Choose your hash function wisely. Some have known collision weaknesses.
-sha384 and sha512 will be slow on 32 bit platforms.
+Choose your hash function wisely. Some have known collision weaknesses,
+while others may be slower depending on the CPU architecture.

Hash objects have these methods:
- update(data): Update the hash object with the bytes in data. Repeated calls
@@ -36,20 +36,20 @@
efficiently compute the digests of data that share a common
initial substring.

-For example, to obtain the digest of the byte string 'Nobody inspects the
-spammish repetition':
+Assuming that Python has been built with MD5 support, the following computes
+the MD5 digest of the byte string b'Nobody inspects the spammish repetition':

>>> import hashlib
>>> m = hashlib.md5()
>>> m.update(b"Nobody inspects")
>>> m.update(b" the spammish repetition")
>>> m.digest()
-b'\\xbbd\\x9c\\x83\\xdd\\x1e\\xa5\\xc9\\xd9\\xde\\xc9\\xa1\\x8d\\xf0\\xff\\xe9'
+b'\xbbd\x9c\x83\xdd\x1e\xa5\xc9\xd9\xde\xc9\xa1\x8d\xf0\xff\xe9'

More condensed:

->>> hashlib.sha224(b"Nobody inspects the spammish repetition").hexdigest()
-'a4337bc45a8fc544c03f52dc550cd6e1e87021bc896588bd79e901e2'
+>>> hashlib.md5(b"Nobody inspects the spammish repetition").hexdigest()
+'bb649c83dd1ea5c9d9dec9a18df0ffe9'

"""

@@ -203,7 +203,7 @@ def file_digest(fileobj, digest, /, *, _bufsize=2**18):
*digest* must either be a hash algorithm name as a *str*, a hash
constructor, or a callable that returns a hash object.
"""
-    # On Linux we could use AF_ALG sockets and sendfile() to archive zero-copy
+    # On Linux we could use AF_ALG sockets and sendfile() to achieve zero-copy
     # hashing with hardware acceleration.
     if isinstance(digest, str):
         digestobj = new(digest)
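For context on the function this hunk touches, a usage sketch of
hashlib.file_digest(), available since Python 3.11 (the file name is a
hypothetical example):

    import hashlib

    # file_digest() reads the file in chunks, keeping memory use bounded.
    with open('example.bin', 'rb') as f:  # hypothetical file
        digest = hashlib.file_digest(f, 'sha256')
    print(digest.hexdigest())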