SN 777: rwxrwxrwx

Beep boop - this is a robot. A new show has been posted to TWiT…

What are your thoughts about today’s show? We’d love to hear from you!

I wasn’t able to fully listen live today, so I didn’t catch if the title was explained. I assume it was, but just in case: The episode number is 777. On Linux, file permissions are managed with the change-mode command, chmod. The command can take the permissions as an octal number, where the final 9 bits (3 octal digits) are groups of three representing read, write and execute for three classes of users: the owning user, the group, and everyone else, in that order. If you want to give all three access permissions to all three classes you would do chmod 777 filename. Another way to do this without the octal would be chmod ugo+rwx filename or chmod a+rwx filename.
https://linux.die.net/man/1/chmod
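For what it’s worth, the same octal value is easy to check from Python’s standard library; each digit breaks down as 4 (read) + 2 (write) + 1 (execute), and the filename below is just a placeholder:

    import os, stat

    # Each octal digit is read(4) + write(2) + execute(1); the three digits
    # cover the owning user, the group, and everyone else, in that order.
    assert stat.S_IRWXU | stat.S_IRWXG | stat.S_IRWXO == 0o777
    os.chmod("filename", 0o777)  # same effect as: chmod 777 filename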


I like Steve’s idea for a dedicated channel through which equipment vendors provide update notifications, but I really don’t think Twitter should be the medium for it. Twitter is on my org’s social network block list, and I don’t think I’d want to mandate that my employees use a social network for business purposes.

There’s something to the idea, though. I’d like to see a modern take on the old news ticker-tape machines. I think the Associated Press has some sort of news-alerting application that could fit the bill.

Right now it’s a mess of email notification subscriptions from different vendors that all have different ways of encoding the importance of an update and different notification schedules. If there were an application that could aggregate all those notifications, let me build a nice dashboard, and then plug them neatly into our ticketing system, that would be sweet.


I wrote this to post to Steve’s newsgroup on SpinRite, but I thought I would also share it here. I’m not an expert on SSD controllers, but I have an intuitive sense that Steve may be reading more into some of his performance timing than perhaps he should. The Milton I refer to is another participant who used to work on HDD controller firmware.

I feel like Milton may know “way too much more” about this than I do, but I want to discuss SSD performance from the perspective of an intuitive outsider with a software development background.

In the early days of SSDs, I remember people suggesting you would be wasting your money if you didn’t buy an SSD containing one specific brand of controller (SandForce, if memory serves). At the time, that spoke to the complexity of managing a collection of flash chips as a coherent unit while delivering the expected lifetime, average throughput, wear levelling, and numerous other things, such as TRIM management (although I believe it wasn’t yet called TRIM; that name came later).

This tells me that there is a lot of “magic” software making up the firmware (or perhaps it’s baked directly into the controller hardware itself). These days, I think it’s gotten even more complex, with error-correcting codes and other functions that allow for higher densities. Oh yeah, and let’s not forget about multi-level caching and other freaky techniques that are not going to be an aid to the “predictable behaviours” you’d want for a speed test.

Flash needs to present itself as a collection of 512-byte (or perhaps 4096-byte) sectors so that the OS still sees a disk it recognizes. Internally, flash is organized into blocks, and in order to rewrite even one bit of a block, the ENTIRE block needs to be erased and then re-written. (This is, in part, why writing flash is slower than reading it.) In looking for specific information on block sizes, I found a technical note from Micron (TN-29-07: Small-Block vs. Large-Block NAND Flash Devices) which says:
“Small-block NAND Flash devices contain blocks made up of 32 pages, where each page contains 512 data bytes + 16 spare bytes. Large-block NAND Flash devices contain blocks made up of 64 pages, each page containing 2,048 data bytes + 64 spare bytes. For a 1Gb NAND Flash device, this translates to 8,192 blocks in the small-block organization and 1,024 blocks in the large-block organization.
Small: 1 block = 528 bytes x 32 pages = (16K + 512) bytes
Large: 1 block = (2K + 64) bytes x 64 pages = (128K + 4K) bytes”
Each of these flash blocks would thus need to contain many of the 512-byte or 4096-byte HDD-sized sectors that an LBA addresses.
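As a quick back-of-the-envelope check, using the large-block figures from that Micron note and assuming 512-byte sectors:

    # Large-block NAND per the Micron note: 64 pages of 2,048 data bytes each
    PAGE_DATA_BYTES = 2048
    PAGES_PER_BLOCK = 64
    SECTOR_BYTES = 512

    block_data_bytes = PAGE_DATA_BYTES * PAGES_PER_BLOCK  # 131,072 bytes (128K)
    sectors_per_block = block_data_bytes // SECTOR_BYTES  # 256 LBA-sized sectors per erase block
    print(block_data_bytes, sectors_per_block)

So every erase block covers a couple of hundred of the sectors the OS thinks it is addressing.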

In order to prevent wearing out frequently written blocks (such as the FAT or directory portions of the file system), there is going to be some level of mapping inside the controller to allow it to figure out which SSD block currently contains which LBAs. This is a bit of a challenging design problem if you stop to think about it. There are many possible designs, but let’s assume most of them need an internal data structure of the SSD controller’s own. If it relocates something, it needs to remember where it put it. It needs to write this mapping into the SSD’s flash as well. It also wants to wear level those writes, an interesting form of recursion… wear levelling the info about the wear levelling. It would seem that it now has to somehow know how to find something that is not always going to be stored in a fixed location. One presumes it has some internal RAM, and at boot time it can fairly quickly scan the entire SSD looking for a special pattern and load the wear-levelling mappings into RAM.
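To make that last idea concrete, here is a toy sketch in Python (purely my own illustration, not how any particular controller actually works) of rebuilding the logical-to-physical map at power-up by scanning per-page metadata, assuming each physical page records the LBA it holds plus a write sequence number so the newest copy wins:

    def rebuild_map(pages):
        # pages: iterable of (physical_page, lba, seq) tuples read from
        # per-page flash metadata; returns {lba: physical_page}, keeping
        # only the most recently written copy of each LBA.
        newest_seq = {}
        lba_to_phys = {}
        for phys, lba, seq in pages:
            if lba is not None and seq >= newest_seq.get(lba, -1):
                newest_seq[lba] = seq
                lba_to_phys[lba] = phys
        return lba_to_phys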

So now the SSD has some data structure in RAM that it needs to “search” when it wants to find a specific LBA. Here’s where my programming background comes in. Perhaps it’s using a hashmap, or a linked list, or a tree, or something even more clever. The efficiency of such a data structure may vary depending on conditions in the drive. (Internal caching can also affect the behaviour of such things.)
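Continuing that toy model (again, an assumption about the general shape, not any vendor’s design), here is what a dict-backed lookup with remap-on-write might look like; a hash map gives roughly constant-time lookups, whereas something like a linear scan of a list would slow down as more of the drive gets remapped, which is exactly the kind of thing that could show up in fine-grained timing:

    class ToyFTL:
        def __init__(self, lba_to_phys):
            self.lba_to_phys = lba_to_phys  # the in-RAM map rebuilt at boot
            self.next_free = max(lba_to_phys.values(), default=-1) + 1  # naive allocator, no garbage collection

        def write(self, lba, data, flash):
            phys = self.next_free            # always write to a fresh physical page
            self.next_free += 1
            flash[phys] = data
            self.lba_to_phys[lba] = phys     # remember where this LBA now lives

        def read(self, lba, flash):
            phys = self.lba_to_phys.get(lba)  # hash-map lookup: ~constant time
            return flash[phys] if phys is not None else b"\x00" * 512  # unmapped LBAs read as zeros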

Another thing to consider is that many of the drives are probably mostly empty. One presumes unused portions of the drive may not have any relocation information present, so they will be faster (or maybe slower) to locate. If the OS fills LBA-addressed sectors mostly from the start of the drive, then past a certain point the behaviour of the drive would change as you move from the used sectors to the later, as-yet-unused sectors.

I don’t really have any conclusions to draw, because one would need very specific information about the internal (and likely highly proprietary) workings of specific SSD controllers. I just want to throw out these thoughts to caution that it may not be so easy to make assumptions about the condition of the drive from fine-grained timing operations on it. No doubt the SSD controller was designed to meet overall statistical behaviour constraints, and it doesn’t make specific behaviour guarantees at fine-grained levels.