build: fix generation of large .vdi images
author Adones Pitogo <[email protected]>
Tue, 11 Jul 2023 05:31:50 +0000 (13:31 +0800)
committer Christian Lamparter <[email protected]>
Sat, 15 Jul 2023 15:02:42 +0000 (17:02 +0200)
Instead of loading the whole image into memory when generating the
sha256 sum, read the file in chunks and update the hash incrementally
to avoid a MemoryError in Python. Also remove a stray empty line.

Fixes: #13056
Signed-off-by: Adones Pitogo <[email protected]>
(mentions empty line removal, adds Fixes: from PR)
Signed-off-by: Christian Lamparter <[email protected]>
scripts/json_add_image_info.py

index 3aeb7ba5fc4b509c876524f61a72f1e753ce2848..915e5f61812578ec9e4e92c5aead2da190a7d8b5 100755 (executable)
@@ -13,7 +13,6 @@ if len(argv) != 2:
 json_path = Path(argv[1])
 file_path = Path(getenv("FILE_DIR")) / getenv("FILE_NAME")
 
-
 if not file_path.is_file():
     print("Skip JSON creation for non existing file", file_path)
     exit(0)
@@ -37,7 +36,14 @@ def get_titles():
 
 
 device_id = getenv("DEVICE_ID")
-hash_file = hashlib.sha256(file_path.read_bytes()).hexdigest()
+
+sha256_hash = hashlib.sha256()
+with open(str(file_path), "rb") as f:
+    # Read the file in 4 KiB blocks and update the hash incrementally
+    for byte_block in iter(lambda: f.read(4096), b""):
+        sha256_hash.update(byte_block)
+
+hash_file = sha256_hash.hexdigest()
 
 if file_path.with_suffix(file_path.suffix + ".sha256sum").exists():
     hash_unsigned = (
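
For reference, a minimal standalone sketch of the same chunked-hashing
technique; the helper name and the example file name below are
illustrative and not part of the script:

import hashlib

def sha256_of_file(path, block_size=4096):
    # Hash the file incrementally so a large image never sits fully in memory.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # iter() keeps calling f.read(block_size) until it returns b"" at EOF
        for block in iter(lambda: f.read(block_size), b""):
            digest.update(block)
    return digest.hexdigest()

if __name__ == "__main__":
    # Hypothetical image name, for illustration only
    print(sha256_of_file("openwrt-x86-64-generic-ext4-combined.vdi"))

Memory use stays roughly constant at one block regardless of image size,
which is what avoids the MemoryError on multi-gigabyte .vdi images.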