3,067,478 events, 1,483,855 push events, 2,405,090 commit messages, 199,024,492 characters
Use include/exclude instead of whitelist/blacklist (#19325) (#23132)
While looking up some config info for winlogbeat, I came across this whitelist/blacklist section.
I submit this PR because black lives matter, racism is evil, and we should all help out to make the world a tiny bit better for our brothers and sisters of color ✊🏾
(cherry picked from commit 096344e3f1ac837eec9c6229bdb5e44740a1342d)
Co-authored-by: Joost De Cock [email protected]
Fixed a REALLY STUPID BUG
LIKE WHAT THE FUCK KIND OF SHIT IS THAT
I am very salty over how much time I wasted on that...
Oh well, it works now ig, and REALLY WELL too!
Damn, SOL is a fucking lifesaver
RE: [Gen-art] Genart last call review of draft-ietf-roll-unaware-leaves-24
Many Thanks Elwynd:
s6.3, next to last para, s8 and s12.2: In view of the statement in s6.3: The RPL Root MUST set the 'E' flag to 1 for all rejection and unknown status codes. The status codes in the 1-10 range [RFC8505] are all considered rejections. I think that IANA should be requested to add a column to the EARO status codes registry being modified by s12.2 that identifies a status code as a rejection or otherwise. Some words in s8 may be appropriate.
Well that would require normative text on the 6LoWPAN part. I guess we can do that at the next iteration of a 6LoWPAN ND specification. For now what we specify is that from the RPL perspective the listed codes denote a failure such that the RPL operation that wraps it cannot happen and that's enough for us.
ED> While I understand that it would be polite to involve 6LoWPAN, WGs don't 'own' RFCs and their associated IANA registries. Since this draft 'needs' the extra information I personally wouldn't see a problem in asking for the extra column. It doesn't break anything 6LoWPAN are doing AFAICS. Anyway that's not my call... ask your AD. PT> I posted a separate thread on this one.
s7: Given that [EFFICIENT-NPDAO] is still a draft, I think this section should be synchronized with the draft so that we don't end up with one or the other new RFC updating an RFC that doesn't yet exist.
Yes, this was a discussion with Alvaro as well during his AD review and what you see is the outcome. In particular, this is one reason why [EFFICIENT-NPDAO] is referenced normatively.
ED> Hmm. Maybe the rest of the IESG will have something to say about this. PT> Maybe I misunderstand what you mean by synchronize. Would you report the change in NPDAO? PT> The trouble is that spec is virtually an RFC, stuck in MISSREF in cluster C310, in particular by this doc.
Abstract: Expand RPL on first use (currently done in s1.) Expand ND.
Done it (reluctantly) for ND. RPL has been used as a noun by people of the art for a long while now. Expanding it would turn the abstract into a book.
ED> I know, I know. But it isn't in the RFC Editor's list of well-known abbreviations. Sorry!
PT> It’s hard to recognize RPL in its full expansion. I reworded the text to avoid the acronym: “ This specification updates RFC6550, RFC6775, and RFC8505, to provide routing services to IPv6 Nodes that implement RFC6775, RFC8505, and their extensions therein, but do not support RFC6550.
“
s9.2.3, item 1: This would be a useful point to mention that the Target IPv6 address is marked by the F Flag being 1.
Actually it is not. It is set to 0 per the previous section. But the Prefix Length is 128, indicating a host address (though not that of the advertiser, hence the 'F' flag set to 0).
ED> I'll take your word for that! The point I was trying to make was that given you have introduced the F Flag, I think it would be highly desirable to explicitly highlight the point where an implementation would expect to set an F flag as well as places where it isn't set. I thought there would be an opportunity somewhere in s9.2.1. PT> You’re correct, we define the flag here because we change the Target Option but this spec is not the one that really needs it. It was an opportunistic insertion. This information is useful to test the path back when we advertise a prefix. It gives the root an address to ping within the advertised prefix. For Host routes, it’s only an indicator that the node is advertising self vs. another party, which in the case of this spec, is redundant with the ‘External’ flag.
Take care,
Pascal
Gen-art mailing list [email protected] https://www.ietf.org/mailman/listinfo/gen-art
Spooky, scary skeletons
Send shivers down your spine Shrieking skulls will shock your soul Seal your doom tonight Spooky, scary skeletons Speak with such a screech You'll shake and shudder in surprise When you hear these zombies shriek We're sorry skeletons, you're so misunderstood You only want to socialize, but I don't think we should 'Cause spooky, scary skeletons Shout startling, shrilly screams They'll sneak from their sarcophagus And just won't leave you be Spirits supernatural are shy what's all the fuss? But bags of bones seem so unsafe, it's semi-serious Spooky, scary skeletons Are silly all the same They'll smile and scrabble slowly by And drive you so insane Sticks and stones will break your bones They seldom let you snooze Spooky, scary skeletons Will wake you with a boo!
distro/rhel84: use a random uuid for XFS partition
Imagine this situation: You have a RHEL system booted from an image produced by osbuild-composer. On this system, you want to use osbuild-composer to create another image of RHEL.
However, there's currently something funny with partitions:
All RHEL images built by osbuild-composer contain a root xfs partition. The interesting bit is that they all share the same xfs partition UUID. This might sound like a good thing for reproducibility but it has a quirk.
The issue appears when osbuild runs the qemu assembler: it needs to mount all partitions of the future image to copy the OS tree into it.
Imagine that osbuild-composer is running on a system booted from an image produced by osbuild-composer. This means that its root xfs partition has this uuid:
efe8afea-c0a8-45dc-8e6e-499279f6fa5d
When osbuild-composer builds an image on this system, it runs osbuild that runs the qemu assembler at some point. As I said previously, it will mount all partitions of the future image. That means that it will also try to mount the root xfs partition with this uuid:
efe8afea-c0a8-45dc-8e6e-499279f6fa5d
Do you remember this one? Yeah, it's the same one as before. However, the xfs kernel driver doesn't like that. It keeps a global table of all mounted xfs partitions and refuses to mount two xfs partitions with the same uuid.
I mean... uuids are meant to be unique, right?
This commit changes the way we build RHEL 8.4 images: Each one now has a unique uuid. It's now literally a unique universally unique identifier. haha
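A minimal sketch of the idea (hypothetical names, not osbuild-composer's actual code): generate a fresh filesystem UUID per build instead of baking one shared UUID into every image definition.

```python
import uuid

# Old behaviour (illustrative): every image definition shared this UUID, so an
# image could not be built on a host whose root fs came from the same template.
SHARED_ROOT_UUID = "efe8afea-c0a8-45dc-8e6e-499279f6fa5d"

def root_partition_options():
    # New behaviour: a unique UUID per image build, so the qemu assembler can
    # always mount the future image's root xfs partition alongside the host's.
    return {"fs_type": "xfs", "uuid": str(uuid.uuid4())}

print(root_partition_options())
```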
--water----normal life----single life, how long will it continue --No matter how much sadness there is in life, remember you are a pretty unique boy --Fight, young man. The future may be dark, but don't be afraid; remember, the beauty will always wait for you somewhere not far away --2020-12-15
Add files via upload
Context
I was looking for an unused and interesting dataset to improve my data science skills on when my professor mentioned the Sloan Digital Sky Survey, which offers public data of space observations. As I found the data super insightful, I want to share it.
Content
The data consists of 10,000 observations of space taken by the SDSS. Every observation is described by 17 feature columns and 1 class column which identifies it as either a star, a galaxy, or a quasar.
Dataset available
https://raw.githubusercontent.com/dsrscientist/dataset1/master/Skyserver.csv
Feature Description
The table results from a query which joins two tables (actually views): "PhotoObj", which contains photometric data, and "SpecObj", which contains spectral data.
To ease your start with the data you can read the feature descriptions below:
View "PhotoObj" objid = Object Identifier ra = J2000 Right Ascension (r-band) dec = J2000 Declination (r-band) Right ascension (abbreviated RA) is the angular distance measured eastward along the celestial equator from the Sun at the March equinox to the hour circle of the point above the earth in question. When paired with declination (abbreviated dec), these astronomical coordinates specify the direction of a point on the celestial sphere (traditionally called in English the skies or the sky) in the equatorial coordinate system.
Source: https://en.wikipedia.org/wiki/Right_ascension
u = better of DeV/Exp magnitude fit
g = better of DeV/Exp magnitude fit
r = better of DeV/Exp magnitude fit
i = better of DeV/Exp magnitude fit
z = better of DeV/Exp magnitude fit
The Thuan-Gunn astronomical magnitude system. u, g, r, i, z represent the response of the 5 bands of the telescope.
Further education: https://www.astro.umd.edu/~ssm/ASTR620/mags.html
run = Run Number
rerun = Rerun Number
camcol = Camera column
field = Field number
Run, rerun, camcol and field are features which describe a field within an image taken by the SDSS. A field is basically a part of the entire image corresponding to 2048 by 1489 pixels. A field can be identified by:
the run number, which identifies the specific scan; the camera column, or "camcol," a number from 1 to 6, identifying the scanline within the run; and the field number. The field number typically starts at 11 (after an initial ramp-up time), and can be as large as 800 for particularly long runs. An additional number, rerun, specifies how the image was processed.
View "SpecObj"
specobjid = Object Identifier
class = object class (galaxy, star or quasar object)
The class identifies an object as either a galaxy, star or quasar. This will be the response variable which we will be trying to predict.
redshift = Final Redshift
plate = plate number
mjd = MJD of observation
fiberid = fiber ID
In physics, redshift happens when light or other electromagnetic radiation from an object is increased in wavelength, or shifted to the red end of the spectrum.
Each spectroscopic exposure employs a large, thin, circular metal plate that positions optical fibers via holes drilled at the locations of the images in the telescope focal plane. These fibers then feed into the spectrographs. Each plate has a unique serial number, which is called plate in views such as SpecObj in the CAS.
Modified Julian Date, used to indicate the date that a given piece of SDSS data (image or spectrum) was taken.
The SDSS spectrograph uses optical fibers to direct the light at the focal plane from individual objects to the slithead. Each object is assigned a corresponding fiberID.
Further information on SDSS images and their attributes:
http://www.sdss3.org/dr9/imaging/imaging_basics.php
http://www.sdss3.org/dr8/glossary.php
Acknowledgements
The data released by the SDSS is in the public domain. It's taken from the current data release DR14.
More information about the license:
http://www.sdss.org/science/image-gallery/
It was acquired by querying the CasJobs database which contains all data published by the SDSS.
The exact query can be found at:
http://skyserver.sdss.org/CasJobs/ (Free account is required!)
There are also other ways to get data from the SDSS catalogue. They can be found under:
They really have a huge database which offers the possibility of creating all kinds of tables with respect to personal interests.
Please don't hesitate to contact me regarding any questions or improvement suggestions. :-)
Inspiration
The dataset offers plenty of information about space to explore. Also, the class column is the perfect target for classification practice!
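A minimal sketch for getting started, assuming pandas is installed and the CSV linked above is reachable:

```python
import pandas as pd

# Load the SDSS table straight from the raw CSV URL given above.
url = "https://raw.githubusercontent.com/dsrscientist/dataset1/master/Skyserver.csv"
df = pd.read_csv(url)

print(df.shape)                    # 10,000 rows: 17 feature columns + 1 class column
print(df["class"].value_counts())  # counts of star, galaxy and quasar observations
```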
Add files via upload
Predict A Doctor's Consultation Fee
We have all been in a situation where we go to a doctor in an emergency and find that the consultation fees are too high. As data scientists, we should do better. What if you had data that records important details about a doctor and you got to build a model to predict the doctor's consultation fee? This is the hackathon that lets you do that.
Size of training set: 5961 records
Size of test set: 1987 records
FEATURES:
Qualification: Qualification and degrees held by the doctor
Experience: Experience of the doctor in number of years
Rating: Rating given by patients
Profile: Type of the doctor
Miscellaeous_Info: Extra information about the doctor
Fees: Fees charged by the doctor
Place: Area and the city where the doctor is located.
https://github.com/dsrscientist/Data-Science-ML-Capstone-Projects
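A hypothetical baseline sketch for this task. The file name "doctor_fees_train.csv" and the numeric parsing below are assumptions; the column names (Experience, Rating, Fees) come from the FEATURES list above.

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

df = pd.read_csv("doctor_fees_train.csv")

# crude numeric features parsed out of free-text columns
df["exp_years"] = df["Experience"].str.extract(r"(\d+)", expand=False).astype(float)
df["rating_pct"] = df["Rating"].str.extract(r"(\d+)", expand=False).astype(float)

X = df[["exp_years", "rating_pct"]].fillna(0)
y = df["Fees"]

X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.2, random_state=42)
model = RandomForestRegressor(n_estimators=200, random_state=42).fit(X_tr, y_tr)
print("MAE:", mean_absolute_error(y_val, model.predict(X_val)))
```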
Add files via upload
Predicting Restaurant Food Cost
Who doesn’t love food? All of us must have a craving for at least a few favourite food items; we may also have a few places where we like to get them, a restaurant which serves our favourite food the way we want it. But there is one factor that will make us reconsider having our favourite food from our favourite restaurant: the cost. In this hackathon, you will be predicting the cost of the food served by restaurants across different cities in India. You will use your data science skills to investigate the factors that really affect the cost, and who knows, maybe you will even gain some very interesting insights that might help you choose what to eat and from where.
Size of training set: 12,690 records
Size of test set: 4,231 records
FEATURES:
TITLE: The feature of the restaurant which helps identify what it serves and for whom it is suitable.
RESTAURANT_ID: A unique ID for each restaurant.
CUISINES: The variety of cuisines that the restaurant offers.
TIME: The open hours of the restaurant.
CITY: The city in which the restaurant is located.
LOCALITY: The locality of the restaurant.
RATING: The average rating of the restaurant by customers.
VOTES: The overall votes received by the restaurant.
COST: The average cost of a two-person meal.
https://github.com/dsrscientist/Data-Science-ML-Capstone-Projects
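A hypothetical sketch of one way to use the text-heavy features: treat the CUISINES text as a bag of words and fit a simple linear model for COST. The file name "restaurant_cost_train.csv" is an assumption; the column names come from the FEATURES list above.

```python
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

df = pd.read_csv("restaurant_cost_train.csv")
X = df["CUISINES"].fillna("")   # cuisine list as free text
y = df["COST"]

model = make_pipeline(TfidfVectorizer(), Ridge())
scores = cross_val_score(model, X, y, cv=3, scoring="neg_mean_absolute_error")
print("MAE:", -scores.mean())
```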
fixed CRLF bullshit, fucking retard git, a single CRLF sneaks in somehow and then its all fucked from then on. I am sick of how many tools I have to fork, now I need to fix fucking git as well
sketch Half in skvm
Key design points...
A) Take care to not commit too much to Half's range or precision. In particular, don't offer operations whose result might not be representable in Half. E.g., shy away from Half divide, not just because we might not be able to do a native divide, but even more so because division produces results that might not fit in Half: ±inf or NaN. (A small numeric illustration follows these design points.)
B) No native Half loads, stores, uniforms or splats, instead converting from I32 or F32. This keeps the entire front-end (user code, Builder, etc.) specified in terms of precise format and oblivious to the various backends' representations of Half. Native Half splats would be less trouble than uniforms I think, and uniforms less trouble than loads and stores, but still enough of a pain that we're better off deferring any of that for now. (Explicit fp16 uniforms do make sense to me though.)
C) Keep the current F32-based Color and all the effect virtuals that use it around, introducing parallel Half-based HalfColors and entry points for those. The key cool idea here is to have the default entry points for F32/Color and Half/HalfColor call each other, so that any given effect can implement one, the other, or both and always be compatible with however it's called. This is mostly about incremental rollout, but I suspect we'll have areas that stick to F32 forever. (Think of the IEEE 754 32-bit float specific bit hacks we used for approx_log/approx_exp.)
D) (not done yet) allow implicit Half->F32 conversion, but of course not the other way around. This makes it easier to lean on the body of F32 routines we already have, and again mostly helps enable incremental rollout.
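As a rough numeric illustration of point (A), using numpy's float16 as a stand-in for Half (just an illustration of the range/precision concern, not skvm code):

```python
import numpy as np

# float16 ("Half") has a max finite value of ~65504 and roughly 3 decimal digits
# of precision, so results of ordinary F32 operations easily fall outside it.
a = np.float16(60000.0)
b = np.float16(0.5)
print(a / b)                              # inf: 120000 overflows Half's range
print(np.float16(1.0) / np.float16(3.0))  # ~0.3333: visible precision loss vs F32
```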
Change-Id: I8bb38efbe476ff89dd2591411e115c2ab3757854 Reviewed-on: https://skia-review.googlesource.com/c/skia/+/341800 Reviewed-by: Herb Derby [email protected] Commit-Queue: Mike Klein [email protected]
Huge upgrade, major rewrite/refactor, new features, everything is polished!!!
Note that due to large number of internal changes to code, a separate INTERNAL CODE CHANGES section is at the bottom. Those are changes which in general do not impact what users see that much, but which definitely impact working on and with inxi! They also make errors less likely, and removed many possible bad data error situations.
BUGS:
-
Obscure, but a very old Tyan mobo used a form of dmidecode data for RAM that I'd never gotten a dataset for before. This tripped a series of errors in inxi, which were actually caused by small errors, failures to check certain things, and simply never assigning data in corner cases. This system used only dmi handles 6 and 7, which is a very rare setup from the very early days of dmi data being settled, but it was valid data, and inxi was actually supposed to support it; because I'd never gotten a dataset containing such legacy hardware data, the support didn't work. There were actually several bugs discovered while tracking this down; all were corrected.
-
Going along with the cpu fixes below, there was a bug where, if stepping was 0, stepping would not show. I had not realized stepping could be 0, so I did a true/false test instead of a defined test, and 0 in perl always tests as false. This is corrected.
-
While going through code, discovered that a missing second argument to main::grabber would have made the glabel tool (mostly BSD, I think) always fail, without exception. That explains why BSD systems were never getting glabel data, heh.
-
Many null get_size tests would not have worked because they were testing for a null array, but ('','') was actually being returned, which is not a null array. The testing and results for get_size were quite random; now they are all the same and consistent, and confirmed correct.
-
In unmounted devices, the match sent to @lsblk to get extended device data would never work with dm-xx type names; it failed to translate them to their mapped names, which is what is used in lsblk matches. This is corrected. This could lead to failures to match the fs of members of luks, raid, etc, particularly noticeable with complex logical device structures. It meant the fallback filters against internal logical volume names and various file system type matches would always fail.
-
A small host of further bugs were found and fixed during the major refactor, but not all of them were noted; they were just fixed, sorry. Those will be lost to history unless you diff the two versions, and that's thousands of lines. In short, more bugs were fixed than are listed above, I just can't remember them all.
FIXES:
-
There was some ambiguity about when inxi falls back to showing the hardware graphics driver instead of the xorg gfx driver when it can't find an xorg driver. That can happen for instance because of wayland, or because of obscure xorg drivers not yet supported. Now the message is very clear: it says the gfx software driver is n/a, and that it's showing the hardware gfx driver.
-
Big redo of cpu microarch: finally handled cases where the same stepping/model ID has two microarches listed. Now that is shown clearly to users, like AMD Zen family 17, model 18, which can be either Zen or Zen+; it shows that ambiguity, plus a comment, note: check, like it shows for the ram report when it's not sure. It shows for instance: arch: Zen/Zen+ note: check in such cases; in other words, it tells users that the naming convention basically changed during the same hardware/die cycle.
-
There were some raid component errors in the unmounted tests, which are supposed to test the raid components and remove them from the mounted list. Note that inxi now also tests better whether something is a raid component, an lvm component, or various other things, so unmounted will be right more often now, though it's still not perfect since there are still more unhandled logical storage components that will show as unmounted when they are parts of logical volumes. Bit by bit!!
-
Part of a significant android fine tuning and fix series: for -P, android uses different default names for partitions, so none showed; now a subset of standard android partitions, like /System, /firmware, etc, shows. Android will never work well though, because google keeps locking down key file read/search permissions in /sys and /proc.
-
More ARM device detections; that got tuned quite a bit and cleaned up. For instance, it was doing case sensitive checks, but cases were found where the value is all upper case, so it was missing them. Now it does case insensitive device type searches.
-
One of the oldest glitches in inxi was the failure to account for the size of the raid arrays versus the size totals of the raid array components, which led to Local Storage results that were uselessly wrong, being based on what is now called the 'raw' disk total, that is, the raw physical total of all system disks. Now if raid is detected the old total: used:... is expanded to: total: raw:... usable:... used:, the usable being the actual disk space that can be used to store data. Also, in the case of LVM systems, a further item is added, lvm-free:, to report the unused but available volume group space, that is, space not currently taken by logical volumes. This can provide a useful overview of your system storage, and is much improved over the previous version, which was technically unable to solve that issue because the internal structures did not support it; now they do. LVM data requires sudo/root unfortunately, so you will see different raw disk totals depending on whether it's run as root or not if there is LVM RAID running.
Sample: inxi -D Drives: Local Storage: total: raw: 340.19 GiB usable: 276.38 GiB lvm-free: 84.61 GiB used: 8.49 GiB (3.1%)
lvm-free is unassigned volume group size, that is, size not assigned to a logical volume in the volume group, but available in the volume group. raw: is the total of all detected block devices; usable is how much of that can be used in file systems. That is, a raid array is made of > 1 devices, but those devices are not available for storage, only the total of the raid volume is. Note that if you are not using LVM, you will never see lvm-free:.
-
An anonymous user sent a dataset that contained a reasonable alternate syntax for sensors output, which made inxi fail to get the sensors data: it was prepending 'T' to temp items, and 'F' to fan items, which made enough sense even though I'd never seen it before. inxi now supports that alternate sensors temp/fan syntax, which should expand the systems it supports by default out of the box.
-
Finally was able to resolve a long standing issue with loading File::Find, which is only used in the --debug 20-22 debugger, by moving it from the top of inxi to a require load in the debugger. I'd tried to fix this before, but failed; the problem is that redhat/fedora have broken apart Perl core modules and made some of them into external modules, which made inxi fail to start due to a 'use' of a module that was not really required. Thanks to mrmazda for pointing this out to me. This time I figured out how to recode some of the uses of File::Find so it would work when loaded without the package debugger. It was hard to figure out; it turned out a specific sub routine call in that specific case required the parentheses that had been left off, very subtle.
-
Subtle issue: unlike most of the other device data processors, the USB data parser did not use the remove-duplicates tool, which led in some cases to duplicated company names in the USB output, which looks silly.
-
Somehow devtmpfs was not being detected in all cases to remove it from the partitions report; it was added to the file system filters to make sure it gets caught.
-
Removed LVM image/meta/data slices from the unmounted report; those are LVM items, but they are internal LVM volumes, not available or usable. I believe there are other data/meta type variants for different LVM features, but I have added as many types as I could find. Also explicitly now remove any _member type item, which is always part of some other logical structure, like RAID or LVM; those were not explicitly handled before.
-
Corrected the various terms ZFS can use for spare drives, and because those describe slightly different situations than simply spare, changed the spare section header to Available, which is more accurate for ZFS.
ENHANCEMENTS:
-
Going along with FIX 2 is updating and adding to the intel and elbrus microarch family/model/stepping IDs (E8C2), so that is fairly up to date now.
-
Added a very crude and highly unreliable default fallback for intel: /sys/devices/cpu/caps/pmu_name, which shows the basic internal name used, which can be quite different from the actual microarch name; but the hope is that for new intel cpus that come out after these last inxi updates, something may show instead of nothing. Note these names are often much more generic, like using skylake for many different microarches.
-
More android enhancements: for androids that allow reading of /system/build.prop, which is a very useful and informative system info file, more android data will show, like the device name and variant, and a few other specialized items. You can tell whether your android device lets inxi read build.prop by whether -S shows Distro: Android 7.1 (2016-07-23) or just Android. If it shows just Android, that means it can't read that file. Showing Android at all is also new: while inxi can't always read build.prop, if that file is there, it's android, so inxi can finally recognize it's running in android even when it can't give much info because things are locked down. inxi in fact did not previously know it was running in android, which is quite different from ARM systems in some ways, but now it does.
If the data is available, it will be used in Distro: and in Machine: data to add more information about the android version and device.
-
A big one: for -p/-P/-o/-j, -x now shows the mapped device name, not just the /dev/dm-xx ID, which makes connecting the various new bits easier for the RAID and Logical reports. Note that /dev/mapper/ is removed from the mapped name since that's redundant and verbose and makes the output harder to read. For mapped devices, the new --logical / -L report lets you drill into the devices to find out what dm-xx is actually based on, though that is a limited feature which only supports drilling to a depth of 2 components/devices; there can be more, particularly for bcache and luks setups, but it's just too hard to code that level of depth, so something is better than nothing in this case, which is the actual choice I was faced with. The perfect in this case really is/was the enemy of the good, as they say.
-
More big ones: for -a, -p/-P/-o/-j shows the kernel device major:minor number, which again lets you trace each device around the system and the report.
-
Added mdadm (if root) for the mdraid report; that let me add a few other details for mdraid not previously available. This added the item 'state:' to the mdraid report with the right -x options.
-
Added the vpu component type to ARM gfx device type detection; don't know how the video processing unit had escaped my notice.
-
Added the fio[a-z] block device type. I'd never heard of that before, but saw it used in a dataset, so learned it's real; it was just never handled as a valid block device type before, like sda, hda, vda, nvme, mmcblk, etc. fio works the same: it's fio + [a-z] + [0-9]+ partition number.
-
Expanded to handle an alternate syntax for Elbrus cpu L1, L2, L3 reporting. Note that in their nomenclature, L0 and L1 are actually both L1, so those are added together when detected.
-
RAM: thanks to a Mint user, antikythera, learned about and handled something new, module 'speed:' vs module 'configured clock speed:'. To quote from supermicro:
<<< Question: Under dmidecode, my 'Configured Clock Speed' is lower than my 'Speed'. What does each term mean and why are they not the same? Answer: Under dmidecode, Speed is the expected speed of the memory (what is advertised on the memory spec sheet) and Configured Clock Speed is what the actual speed is now. The cause could be many things but the main possibilities are mismatching memory and using a CPU that doesn't support your expected memory clock speed. Please use only one type of memory and make sure that your CPU supports your memory.
-
Since RAM was getting a look, also changed cases where ddr ram speed is reported in MHz; now it will show the speeds as: [speed * 2] MT/s ([speed] MHz). This will let users make apples to apples speed comparisons between different systems. Since MT/s is largely standard now, there's no need to translate that to MHz.
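As a rough illustration of the conversion described above (not inxi's actual Perl code), assuming DDR's two transfers per clock:

```python
def ddr_speed_label(reported_mhz: int) -> str:
    # DDR memory performs two transfers per clock cycle, so the effective
    # transfer rate in MT/s is twice the reported clock in MHz.
    return f"{reported_mhz * 2} MT/s ({reported_mhz} MHz)"

print(ddr_speed_label(800))  # "1600 MT/s (800 MHz)", as in the RAM sample below
```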
-
And, even more!! When RAM speeds are logically absurd, inxi adds in note: check. This is from a real user's data by the way; as you can see, it triggers all the new RAM per Device report features.
Sample: Memory: RAM: total: 31.38 GiB used: 20.65 GiB (65.8%) Array-1: capacity: N/A slots: 4 note: check EC: N/A Device-1: DIMM_A1 size: 8 GiB speed: 1600 MT/s (800 MHz) Device-2: DIMM_A2 size: 8 GiB speed: spec: 1600 MT/s (800 MHz) actual: 61910 MT/s (30955 MHz) note: check Device-3: DIMM_B1 size: 8 GiB speed: 1600 MT/s (800 MHz) Device-4: DIMM_B2 size: 8 GiB speed: spec: 1600 MT/s (800 MHz) actual: 2 MT/s (1 MHz) note: check
- More disk vendors!!! More disk vendor IDs!!! Yes, that's right, eternity exists, here, now, and manifests every day!! Thanks to the linux-lite hardware database for this eternally generating list. Never underestimate the creativity of mankind to make more disk drive companies, and to release new model IDs for existing companies. Yes, I feel that this is a metaphor for something much larger, but what that is, I'm not entirely clear about.
CHANGES:
-
Recent kernel changes have added a lot more sensor data in /sys, although this varies system to system; but now, if your system supports it, you can get at least partial hdd temp reports without needing hddtemp or root. Early results suggest that nvme may have better support than spinning disks, but it really varies. inxi will now look for the /sys based temp first, then fall back to the much slower, root/sudo only hddtemp. You can force hddtemp always with the --hddtemp option, which has a corresponding configuration item.
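A minimal sketch of reading those /sys temperatures directly (an illustration of the data source, not inxi's own code; it assumes a kernel that exposes drive temperatures through the hwmon interface, e.g. via the drivetemp driver):

```python
import glob

# hwmon exposes temperatures in millidegrees Celsius under temp*_input.
for hwmon in glob.glob("/sys/class/hwmon/hwmon*"):
    try:
        name = open(hwmon + "/name").read().strip()
        temps = [int(open(f).read()) / 1000 for f in glob.glob(hwmon + "/temp*_input")]
    except OSError:
        continue
    if temps:
        print(name, temps)  # e.g. "drivetemp" for a SATA disk, "nvme" for an NVMe drive
```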
-
The long requested and awaited yet arcane and obscure feature -L/--logical, which tries to give a reasonably good report on LVM, LUKS, and VeraCrypt, as well as handling LVM raid, both regular and thin, is now working, more or less. This took a lot of testing and will probably not be reasonably complete for a while, mainly because the levels of abstraction possible between lvm, lvm raid, mdraid, LUKS, bcache, and other caching and encryption options are just too deep to allow for easy handling or easy output. But it is a very solid and good start in my view; going from nothing to something is always a big improvement!! LVM reports require root/sudo. This will, finally, close issue #135.
-
Going along with -L, and serving as a model for the logic of -L, was the complete refactor of -R, RAID, which was a real mess internally, definitely one of the messiest and hardest to work with features of inxi before the refactor. It's now completely cleaned up and modularized, and it is easy to add raid types, which was not possible before; it now cleanly supports zfs, mdraid, and lvm raid, with in depth reports and added items like mdraid size, raid component device sizes, and maj:min numbers if the -a option is used. Note that LVM RAID requires root/sudo.
-
Added some more sensors dimm and volts items, a slight expansion. Note the further expansion of sensors made possible by the recently upgraded sensors output logic, as well as the new inxi internal sensors data structure, which is far more granular than the previous version and allows for much more fine grained control and output, though only gpu data currently takes advantage of this new power under the covers. As noted, the /sys based hdd temps use the same source, only straight from /sys, since it was actually easier to use the data directly from /sys than to try to map the drive locations to specific drives in sensors output. Well, to be accurate, since now only board type sensors are used for the temp/fan speed, voltage, etc reports, the removal of entire sensor groups means less chance of wrong results.
-
To bring the ancient RAID logic in line with the rest of inxi's style, made zfs, mdraid, and lvm raid components use incrementing numbers, like cpu cores do. This got rid of the kind of ugly hacks used previously, which were not the same for zfs or mdraid; now they are all the same, except that the numbers for mdraid are the actual device numbers that mdraid supplies, and the LVM and ZFS numbers are just autoincremented, starting at 1.
-
Changed message <root/superuser required> to <superuser required> because it's shorter and communicates the same thing.
INTERNAL CODE CHANGES:
-
Small, transparent test: tested on Perl 5.032 for Perl 7 compatibility. All tests passed; no legacy code issues in inxi as of now.
-
Although most users won't notice, a big chunk of inxi was refactored internally, which is why the new -L, the revamped -R, and the fixed disk totals all finally work now. This should hopefully result in more consistent output and fewer oddities and randomnesses, since more of the methods now use the same tools under the covers. The refactor also significantly improved inxi's execution speed, by about 4-5%, but most of those gains are not visible due to the added new features; the end result is that the new inxi runs at roughly the same speed as pre 3.2.00 inxi, but does more, and does it better, internally at least. If you have a very good eye you may also note a few places where this manifests externally as well. Last I checked, about 10-12% of the lines of inxi had been changed, but I think that number is higher now. Everything that could be optimized was; everything that could be made more efficient was.
-
Several core tools in inxi were expanded to work much more cleanly, like reader(), which now supports returning just the index value you want; that always happened on the caller end before, which led to extra code. get_size likewise was expanded to do a string return, which let me remove a lot of internal redundant code for creating the size unit output, like 32 MiB. uniq() was also redone to work exclusively by reference.
-
Many bad reference and dereference practices that had slipped into inxi from the start are mostly corrected now; array assignments use push now, rather than assigning to an array, then adding that array to another array, and assigning those to the master array. Several unnecessary and cpu/ram intensive copying steps were removed in many locations internally in inxi. Also, inxi now uses more direct anonymous array and hash reference assignments, which again removes redundant array/hash creation, copying, and assignment.
-
Also added explicit -> dereferencing arrows to make the code more clear and readable, and to make it easier for perl to know what is happening. The lack of consistency actually created confusion: I was not aware of what certain code was doing, and didn't realize it was doing the same thing as other code, because different methods and syntaxes were being used for referencing array/hash components. I probably missed some, but I got most of them.
-
Instituted a new rule for perl builtin subroutines: if the sub takes 2 or more arguments, always use parentheses; it makes the code much easier to follow because you see the closing ), like: push(@rows,@row); Most perl builtins that take only one arg do not use parentheses, except length, which just looks weird when used in math tests, that is: length($var) > 13 looks better than length $var > 13. This resolved inconsistent uses that had grown over time, so now all the main builtins follow these rules consistently internally.
Due to certain style elements, and the time required to carefully go through all these rules, grep and map do not yet consistently use these rules; that's because the tendency has been to use the grep {..test..} @array and map {...actions...} @array forms.
-
Mainly to deal with android failures to read standard system files due to google locking it down, moved most file queries to use -r, is readable, rather than -e, exists, or -f, is file, unless it only needs to know if it exists, of course. This fixed many null data errors in android even on locked androids.
-
Added in %mapper and %dmmapper hashes to allow for easy mapping and unmapping of mapped block devices. Got rid of other ways of doing that, and made it consistent throughout inxi. These are globals that load once.
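For illustration, the dm-xx to mapper-name translation those hashes perform can be sketched like this in Python (the hashes themselves are Perl; the /sys path below is the standard kernel interface for device-mapper names):

```python
import glob

# Map dm-X kernel names to their device-mapper names (and back), analogous
# to the %dmmapper / %mapper idea described above.
dm_to_name, name_to_dm = {}, {}
for path in glob.glob("/sys/block/dm-*/dm/name"):
    dm = path.split("/")[3]            # e.g. "dm-0"
    name = open(path).read().strip()   # e.g. "vg0-root"
    dm_to_name[dm] = name
    name_to_dm[name] = dm

print(dm_to_name)
```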
-
Learned that the perl builtin split() has a very strange and, in my view, originally terrible design decision: it treats the characters of the split string as regex rules, like split('^^',$string), where '^^' should logically be a string value, not a ^ start-of-string anchor followed by a ^, but that's how it is; so that was carefully checked and made consistent as well. Also expanded split to take advantage of the limit on the number of splits to do, which I had only used occasionally before, but only updated field/value splits where I have a good idea of what the data is. This is very useful when the data is in the form of field: value, but the value can contain : as well. You have to be very careful however, since for some data we do in fact want the 2nd split, but not the subsequent ones, so I only updated the ones I was very sure about.
-
Going along with the cpu microarch fixes, updated and cleaned up all the lists of model/stepping matches; now they are all in order and much easier to scan and search. That had gotten sloppy over the years.
-
More ARM: moved dummy and codec device values into their own storage arrays; that let me remove the filters against those in the other detections. It makes the logic easier to read and maintain as well.
Poor Change Management...
Yeah, I really need to start doing better tracking of changes, but I often get distracted and fix three things at once.
Code cleanup. Lots of code cleanup. Remove references to "V1" and "V2" sector data, just use sector data. Much easier. Fix issues with JS not loading returned data. Fix icons for DS devices now using the same tag. Create single shared UDP client for "all the thingz". Socket management, yo. Clean up Hue stream creation/setup method. Fix log messages being long and all looking like errors. One method for DS sending, versus two that do variations of the same thing. Add custom method to set "device data", so that our DS emulator and normal device are the same. Still needs more cleanup. Lots of love for the Hue stuff. Almost got it 100% again. Did I mention DreamScreen streaming from Glimmr works again? Still need to check if it can receive. Lots more, I'm sure. ADD sucks.