Overhead calculator #95
Hi @X-Ryl669, sorry for the late response, it's taken me a bit to catch up on open issues. The current minimums are 2 blocks for the superblock, 2 blocks for the root directory (and 2 blocks for each additional directory), and at least 1 block for each file.

So if you had 10 files stored in root, that would take 2 + 2 + 10 = 14 blocks. This is useful for most uses of embedded filesystems when files are smaller than the block size. The full formula that handles when directory blocks/file blocks overflow is a bit more complicated; the full details can be found in the SPEC.md, but a direct formula for the common case can be put together from these minimums (there is a rough sketch below).

Currently this isn't great when there are very few blocks (for example internal flash), but it works ok for most forms of external flash. Most notably, small files always take up a full block at minimum.

As a sidenote: I'm currently working on this as a part of the 2.0 work, which will get rid of the 2-block superblock requirement and allow small files to be inlined in their parent's directory blocks: #85
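Here is a small sketch of that minimum in C. This is just one reading of the minimums described above; the function and parameter names are made up, and it ignores the overflow cases covered in SPEC.md:

```c
// Rough littlefs v1 minimum-storage estimate, following the minimums above:
// 2 blocks for the superblock, 2 blocks per directory (root included),
// and at least 1 full block per file, even for small files.
#include <stdint.h>

static uint32_t lfs1_min_blocks(uint32_t dir_count, uint32_t file_count) {
    uint32_t superblock  = 2;              // 2-block superblock
    uint32_t directories = 2 * dir_count;  // each directory is a 2-block metadata pair
    uint32_t files       = file_count;     // each file takes at least one whole block
    return superblock + directories + files;
}

// Example: 1 directory (the root) with 10 small files:
//   lfs1_min_blocks(1, 10) == 2 + 2 + 10 == 14 blocks
```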
Thanks a lot!
@geky for v2, could you show what formulas one could use to calculate the overhead?
@geky is it still the same overhead for v2.x?
Hi @ExtremeGTX, @perigoso, sorry about missing this earlier. This has changed in v2; it's improved in some ways but has also become a bit more complicated.

The file data structure is the same, so the per-file data cost is unchanged: files larger than the inline limit still take up whole blocks, with a small amount of skip-list overhead per block.

Metadata gets a bit more complicated. Each directory still needs at minimum one pair of metadata blocks, and each piece of metadata has a 4x overhead (2x for block pairs, 2x to avoid exponential runoff). The superblock no longer needs its own pair of blocks; it lives in the root directory's metadata pair. Each file's metadata (its name, tags, and either inline data or a pointer to its data blocks) is stored in its parent directory's pair, so small files no longer need a block of their own.

So, at the time of writing, this is roughly: one 2-block metadata pair per directory (including the root), plus ~4x the size of each file's metadata, plus whole blocks for any file data too large to be inlined (see the sketch below for one way to put this together). Some other things to note: littlefs is copy-on-write, so it needs some free space to make progress, and behavior degrades as the filesystem gets close to full.
For these reasons I would suggest some amount of extra storage to avoid near-full issues, something in the range of 1.5x-2x. These extra blocks will also contribute to dynamic wear-leveling, extending the lifetime of the device, so they're not really wasted in a sense. Hopefully this info helps.
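As a very rough sketch of how these pieces fit together (the names and the per-file metadata size are guesses, and the real inline limit and compaction behavior are configuration-dependent, so treat this as an approximation rather than littlefs's actual accounting):

```c
// Rough littlefs v2 storage estimate, based on the description above:
// - one 2-block metadata pair per directory (superblock lives in the root pair)
// - each file's metadata costs ~4x its raw size (2x for the pair, 2x to avoid
//   exponential runoff), and is packed into its parent's pair
// - file data above the inline limit takes whole blocks, minus ~8 bytes/block
//   of skip-list overhead
#include <stdint.h>

static uint32_t lfs2_rough_blocks(uint32_t block_size, uint32_t dir_count,
                                  uint32_t file_count, uint32_t file_size,
                                  uint32_t name_len) {
    // guess at the inline threshold; the real limit depends on configuration
    uint32_t inline_limit = block_size / 8;
    int inlined = (file_size <= inline_limit);

    // raw metadata per file: name + a handful of tag bytes, plus either the
    // inline data or a pointer to the file's data (rough guesses)
    uint32_t per_file = name_len + 16 + (inlined ? file_size : 8);

    // with the 4x overhead, a 2-block pair holds about block_size/2 of raw
    // metadata before it has to split into more pairs
    uint32_t per_pair = block_size / 2;
    uint32_t pairs = (file_count * per_file + per_pair - 1) / per_pair;
    if (pairs < dir_count) {
        pairs = dir_count;              // at least one pair per directory
    }

    uint32_t data_blocks = 0;
    if (!inlined) {
        uint32_t usable = block_size - 8;   // ~8 bytes/block of pointers
        data_blocks = file_count * ((file_size + usable - 1) / usable);
    }

    return 2 * pairs + data_blocks;
}
```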
What about the variable skip-list section of each block? It varies with the block number and basically grows with the size of the file?
@kyrreaa, great question! It's super unintuitive, but because the variable pointers form a perfectly balanced binary tree, on average the overhead never exceeds 2 pointers per block. Since our word size is 32 bits, that's at most 8 bytes of pointer overhead per block on average. We can work through a couple examples to see this is true (there is a small sketch below that does the counting).

One way to prove this is to look at each row of pointers. Ignoring the first block, the first row has roughly n pointers (one in every block), the second row has roughly n/2 (one in every other block), the third roughly n/4, and so on. This gives us a geometric series that converges to 2n pointers in total, or an average of 2 pointers per block.
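To see the numbers, here is a small standalone check (an illustration, not code from littlefs): block 0 holds no pointers, block i holds one pointer plus one more for each trailing zero in i, and the average stays below 2 pointers (8 bytes with 32-bit words) per block:

```c
// Count skip-list pointers for a CTZ list of n blocks and report the average.
// Block 0 holds no pointers; block i holds ctz(i) + 1 pointers.
#include <stdint.h>
#include <stdio.h>

static uint32_t pointers_in_block(uint32_t i) {
    if (i == 0) {
        return 0;
    }
    uint32_t count = 1;
    while ((i & 1) == 0) {      // count trailing zeroes
        count++;
        i >>= 1;
    }
    return count;
}

int main(void) {
    for (uint32_t n = 8; n <= (1u << 16); n <<= 2) {
        uint64_t total = 0;
        for (uint32_t i = 0; i < n; i++) {
            total += pointers_in_block(i);
        }
        printf("%6u blocks: %8llu pointers, %.3f per block\n",
               (unsigned)n, (unsigned long long)total, (double)total / n);
    }
    return 0;
}
```

The average creeps toward 2 as the file grows but never reaches it, which is where the 8-bytes-per-block figure later in the thread comes from.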
Yeah, I figured the number of pointers from the block number by counting the trailing zeroes in the block number and adding one. Since it is a bit convoluted, I guess the cost of computing an "actual size on disk" stat would be too high.
Hi @geky,

File system size = 65536 bytes
Available_space = 64K - 2*4K (overhead: 1x SuperBlock + 1x RootDir) = 56K
Max_Size_per_log_file = 56K / 2 = 28672 bytes
Available_size_per_file = 28672 - 1*4KB (overhead: file_metadata) = 24576
LogFileName = log.0000 (8 bytes)

But this calculation gives me an error, no space left on the file system, unless I subtract 28 more bytes from Available_size_per_file. I assume these 28 bytes are some kind of per-file overhead?

Thank you.
Hi @shahawi-sumup, those 28 bytes come from the CTZ skip-list overhead: littlefs carves out 8 bytes for each block for its own bookkeeping (the details are a bit more complicated, see above). That being said, littlefs doesn't reserve a block for each file's metadata; the file's metadata is stored inside the parent directory. So you shouldn't need the 1*4KB file_metadata term in your calculation.
In general I would avoid trying to fill up littlefs completely. The copy-on-write system is kind of complex and can result in ENOSPC in strange situations like what you're seeing. I would save at least ~10% of storage for copy-on-write operations. Though in your case, at 16 blocks, saving 2 blocks (or maybe even just 1, depending on your setup) should be enough.
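To make the per-block bookkeeping concrete, here is a sketch of the usable capacity of a file spanning n blocks once the skip-list pointers are subtracted. This is an approximation of the layout described above, not littlefs's exact accounting:

```c
// Approximate usable capacity of a CTZ-list file spanning n blocks:
// block 0 has no pointers, block i loses (ctz(i) + 1) * 4 bytes to pointers.
// The real accounting is slightly more involved, so don't fill to the byte.
#include <stdint.h>

static uint32_t ctz_file_capacity(uint32_t n_blocks, uint32_t block_size) {
    uint32_t capacity = 0;
    for (uint32_t i = 0; i < n_blocks; i++) {
        uint32_t pointers = 0;
        if (i > 0) {
            pointers = 1;
            for (uint32_t j = i; (j & 1) == 0; j >>= 1) {
                pointers++;
            }
        }
        capacity += block_size - 4 * pointers;
    }
    return capacity;
}

// ctz_file_capacity(6, 4096) == 24544, a little short of the 24576 bytes
// computed above, which is roughly why the log file needed a few bytes of slack.
```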
This is being referenced externally, and a bit incorrectly (unfortunately the exact constraints are complicated), so I just wanted to clarify: littlefs can fit in 2 blocks, as of v2, iff all files are "inlineable", that is, small enough to be stored directly in their parent directory's metadata pair (the exact size limits are configuration-dependent).
Inline files are more expensive, using ~4x the storage, but this is better than storing small files in full blocks. If you have a small amount of storage, say, 4 blocks, you may also want to consider using a larger block size, so that more of the storage ends up in the metadata pair where small files can be inlined.
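As an illustration of that last suggestion, a configuration along these lines (the numbers are made up, and the driver callbacks and buffers are omitted) trades four small blocks for two larger ones so the metadata pair can hold more inline data:

```c
// Hypothetical lfs_config for a small 16KiB region: two 8KiB blocks instead of
// four 4KiB blocks, so the single metadata pair is larger and more files can
// be inlined. Driver callbacks (.read, .prog, .erase, .sync) and static
// buffers are omitted here for brevity.
#include "lfs.h"

static const struct lfs_config cfg = {
    // .read = ..., .prog = ..., .erase = ..., .sync = ...,  (flash driver)

    .read_size      = 16,
    .prog_size      = 16,
    .block_size     = 8192,   // 2 x 8KiB rather than 4 x 4KiB
    .block_count    = 2,
    .block_cycles   = 500,
    .cache_size     = 256,
    .lookahead_size = 16,
};
```

The block size still needs to be a multiple of the underlying erase size, so whether this works depends on the flash geometry.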
Can you provide some formula to figure out the overhead used by the filesystem compared to the data stored?
Something with page size and block size as input, and then Y * (number of files of size XXX)?
Thanks