3,303,232 events, 1,376,675 push events, 2,191,422 commit messages, 181,213,835 characters
New data: 2021-01-20: See data notes for important messages.
Vaccine datasets:
- 2021-01-19: Fully vaccinated data have been added (vaccine_completion_cumulative.csv, timeseries_prov/vaccine_completion_timeseries_prov.csv, timeseries_canada/vaccine_completion_timeseries_canada.csv). Note that this value is not currently reported by all provinces (some provinces have all 0s).
- 2021-01-11: Our Ontario vaccine dataset has changed. Previously, we used two datasets: the MoH Daily Situation Report (https://www.oha.com/news/updates-on-the-novel-coronavirus), which is released weekdays in the evenings, and the “COVID-19 Vaccine Data in Ontario” dataset (https://data.ontario.ca/dataset/covid-19-vaccine-data-in-ontario), which is released every day in the mornings. Because the Daily Situation Report is released later in the day, it has more up-to-date numbers. However, since it is not available on weekends, this leads to an artificial “dip” in numbers on Saturday and “jump” on Monday due to the transition between data sources. We will now exclusively use the daily “COVID-19 Vaccine Data in Ontario” dataset. Although our numbers will be slightly less timely, the daily values will be consistent. We have replaced our historical dataset with “COVID-19 Vaccine Data in Ontario” as far back as they are available.
- 2020-12-17: Vaccination data have been added as time series in timeseries_prov and timeseries_hr.
- 2020-12-15: We have added two vaccine datasets to the repository, vaccine_administration_cumulative.csv and vaccine_distribution_cumulative.csv. These data should be considered preliminary and are subject to change and revision. The format of these new datasets may also change at any time as the data situation evolves.
Upcoming changes (specific dates to be announced soon):
- The data structure of time series data will change in response to user feedback. This will only consist of adding additional columns to make the data easier to work with. The core columns will remain the same, for now. More details to follow. Initially, the updated dataset will be provided alongside the old dataset. After a time, the new data format will completely replace the old format.
Recent changes:
- 2021-01-08: The directories cases_extra and mortality_extra have been moved to other/cases_extra and other/mortality_extra.
Revise historical data: cases (BC, MB, ON, SK); mortality (SK).
Note regarding deaths added in QC today: “The data also report 66 new deaths, for a total of 9,208. Among these 66 deaths, 10 have occurred in the last 24 hours, 42 have occurred between January 13 and January 18, 11 have occurred before January 13 and 3 have occurred at an unknown date.” We report deaths such that our cumulative regional totals match today’s values. This sometimes results in extra deaths with today’s date when older deaths are removed.
Note about SK data: As of 2020-12-14, we are providing a daily version of the official SK dataset that is compatible with the rest of our dataset in the folder official_datasets/sk. See below for information about our regular updates.
SK transitioned to reporting according to a new, expanded set of health regions on 2020-09-14. Unfortunately, the new health regions do not correspond exactly to the old health regions. Additionally, the case time series using the new boundaries does not exist for dates earlier than August 4, making it impossible to provide a complete time series using the new boundaries.
For now, we are adding new cases according to the list of new cases given in the “highlights” section of the SK government website (https://dashboard.saskatchewan.ca/health-wellness/covid-19/cases). These new cases are roughly grouped according to the old boundaries. However, health region totals were redistributed when the new boundaries were instituted on 2020-09-14, so while our daily case numbers match the numbers given in this section, our cumulative totals do not. We have reached out to the SK government to determine how this issue can be resolved. We will rectify our SK health region time series as soon as it becomes possible to do so.
gdb/dwarf: add assertion in maybe_queue_comp_unit
The symptom that leads to this is the crash described in PR 26828:
/home/simark/src/binutils-gdb/gdb/dwarf2/read.c:23478:25: runtime error: member access within null pointer of type 'struct dwarf2_cu'
The line of the crash is the following, in follow_die_offset:
    if (target_cu != cu)
      target_cu->ancestor = cu;   <--- HERE
The line that assigns nullptr to target_cu is the per_objfile->get_cu call after maybe_queue_comp_unit has been called:
    /* If necessary, add it to the queue and load its DIEs.  */
    if (maybe_queue_comp_unit (cu, per_cu, per_objfile, cu->language))
      load_full_comp_unit (per_cu, per_objfile, per_objfile->get_cu (per_cu),
                           false, cu->language);

    target_cu = per_objfile->get_cu (per_cu);   <--- HERE
Some background: there is an invariant, documented in
maybe_queue_comp_unit's doc, that if a CU is queued for expansion
(present in dwarf2_per_bfd::queue), then its DIEs are loaded in memory.
"Its DIEs are loaded in memory" is a synonym for saying that a dwarf2_cu object exists for this CU. Yet another way to say it is that per_objfile->get_cu (per_cu) returns something other than nullptr for that CU.
The crash documented in PR 26828 triggers some hard-to-reproduce sequence that ends up violating the invariant:
- dwarf2_fetch_die_type_sect_off gets called for a DIE in CU A
- The DIE in CU A requires some DIE in CU B
- follow_die_offset calls maybe_queue_comp_unit. maybe_queue_comp_unit sees CU B is not queued and its DIEs are not loaded, so it enqueues it and returns 1 to its caller - meaning "the DIEs are not loaded, you should load them" - prompting follow_die_offset to load the DIEs by calling load_full_comp_unit
- Note that CU B is enqueued by maybe_queue_comp_unit even if it has already been expanded. It's a bit useless (and causes trouble, see the next patch), but that's how it works right now.
- Since we entered the dwarf2/read code through dwarf2_fetch_die_type_sect_off, nothing processes the queue, so we exit the dwarf2/read code with CU B still lingering in the queue.
- dwarf2_fetch_die_type_sect_off gets called for a DIE in CU A, again
- The DIE in CU A requires some DIE in CU B, again
- This time, maybe_queue_comp_unit sees that CU B is in the queue. Because of the invariant that if a CU is in the queue, its DIEs are loaded in memory, it returns 0 to its caller, meaning "you don't need to load the DIEs!"
- That happens to be true, so everything is fine for now.
- Time passes; some things call dwarf2_per_objfile::age_comp_units enough that CU B's age goes past the dwarf_max_cache_age threshold. age_comp_units proceeds to free CU B's DIEs. Remember that CU B is still lingering in the queue (oops, the invariant just got violated).
- dwarf2_fetch_die_type_sect_off gets called for a DIE in CU A, again
- The DIE in CU A requires some DIE in CU B, again
- maybe_queue_comp_unit sees that CU B is in the queue, so it returns "you don't need to load the DIEs!" to its caller. However, we know at this point that this is false.
- follow_die_offset doesn't load the DIEs and tries to obtain the DIEs for CU B:

      target_cu = per_objfile->get_cu (per_cu);

  But since they are not loaded, target_cu is nullptr, and we get the crash mentioned above a few lines after that.
This patch adds an assertion in maybe_queue_comp_unit to verify the invariant, to make sure it doesn't return a falsehood to its caller.
The current patch doesn't fix the issue (the next patch does), but it makes it so we catch the problem earlier and get this assertion failure instead of a segmentation fault:
/home/simark/src/binutils-gdb/gdb/dwarf2/read.c:9100: internal-error:
int maybe_queue_comp_unit(dwarf2_cu*, dwarf2_per_cu_data*, dwarf2_per_objfile*, language):
Assertion `per_objfile->get_cu (per_cu) != nullptr' failed.
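For reference, a minimal sketch of where the assertion lands (the queued-CU branch shown here is an assumption about the surrounding code; only the asserted condition comes from the failure message above):

    /* Sketch: in maybe_queue_comp_unit, when PER_CU is already queued,
       the invariant says its DIEs must already be loaded - that is, a
       dwarf2_cu must exist for it.  Check it before telling the caller
       "you don't need to load the DIEs".  */
    if (per_cu->queued)
      {
        gdb_assert (per_objfile->get_cu (per_cu) != nullptr);
        return 0;
      }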
gdb/ChangeLog:
PR gdb/26828
* dwarf2/read.c (maybe_queue_comp_unit): Add assertion.
Change-Id: I4e51bd7bd58773f9fadf480179cbc4bae61508fe
here's skidded dog shit
eat shit, hyper can suck my dick, ruk is a bitch
Removal of Azure AD UserAuthenticationMethod 16457 logic
❌ 1⃣️6⃣️4⃣️5⃣️7⃣️ Having investigated this threat actor activity for quite some time, we have certainly found investigative utility in surfacing anomalous AAD authentications¹. However, in our experience, only a subset of confirmed threat actor activity was associated with UserAuthenticationMethod 16457 - and even in those environments, other threat actor authentication methods were observed. The value, 16457, is driven by flags that are set to provide context that isn't intended to have global meaning.
Most importantly, this 16457 value is very common globally and is unlikely to result in meaningful discovery of threat actor activity in isolation (without comparing it with the realm config or otherwise baselining an environment). Put simply, its value as an indicator is limited and tenant-specific - and its inclusion is likely generating false positives & questions for Sparrow users.
After thoroughly exploring its meaning & its global prevalence with internal teams in December, we removed the 16457 value from remaining internal hunting logic (and didn't release it in public Azure Sentinel logic). I recommend you consider removing it from this tool and from the text of Alert AA21-008A as well.
Note: I see that @DeemOnSecurity acknowledged this value was originally intended as a potentially anomalous event in Issue #20 - but that nuance appears to have been lost on the broad user base. If replacement logic is desired, I expect our organizations can work together on alternate solutions!
Thanks in advance; and as an ex-CISA employee myself, I appreciate the development & release of actionable tools. Awesome stuff! -YOUR BOY CARR
¹ Understanding "Solorigate"'s Identity IOCs
xmodmap changes for ergodox changes
The ErgoDox is now programmed to send a "Super" keycode instead of "Alt" because Qt has an unconfigurable behavior of stealing input focus when Alt is pressed (it assumes you want to access the menu), which then proceeds to steal shortcuts from terminal programs like tmux.
QMK has no option to send an actual Meta--and I'm not sure if Meta is even defined by the HID standard--so I've configured the keycap to send Super because it can then be reassigned to Meta in X where it can be added to mod1 and interpreted as Alt! This means xev(1) shows the keysym as "Meta," but applications continue to interpret it as "Alt" (via mod1)... yet Qt is smart enough to recognize that it's not really Alt, and alas it is no longer stealing my fucking input focus or bindings.
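Concretely, the remap is along these lines (keycode 133 is the usual evdev code for left Super; the exact codes here are illustrative, not necessarily my real config):

    $ xmodmap -e 'remove mod4 = Super_L' \
              -e 'keycode 133 = Meta_L' \
              -e 'add mod1 = Meta_L'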
(The input focus behavior is really a bad one, IMO: After you've pressed Alt and focus is now on the menu bar instead of the inner window, it's not visually obvious and your subsequent keystrokes to your editor, shell, Slack, or whatever are effectively lost as you wonder where the hell focus went.)
While KWin has continued to interpret the new key as Alt (KWin refers to Super as "Meta," FWIW, as does the rest of the modern Internet apparently), it triggered one of those hacky bugs internally that broke Alt-Tab; after nearly endless trial and error trying to resolve the situation, I have relented and rebound the app switcher to Ctrl-Esc instead (which is more comfortable on my layout anyhow).
Hmph. Qt.
Change time of dinner to 5.13pm
Mama wants to get rid of the late-arriving customers, so from now on the door will close 2 minutes earlier.
Discussed with my brothers...
serial: core: Allow processing sysrq at port unlock time
[ Upstream commit d6e1935819db0c91ce4a5af82466f3ab50d17346 ]
Right now serial drivers process sysrq keys deep in their character receiving code. This means that they've already grabbed their port->lock spinlock. This can end up getting in the way if we've got to do serial stuff (especially kgdb) in response to the sysrq.
Serial drivers have various hacks in them to handle this. Looking at '8250_port.c' you can see that the console_write() skips locking if we're in the sysrq handler. Looking at 'msm_serial.c' you can see that the port lock is dropped around uart_handle_sysrq_char().
It turns out that these hacks aren't exactly perfect. If you have lockdep turned on and use something like the 8250_port hack you'll get a splat that looks like:
WARNING: possible circular locking dependency detected
[...]
is trying to acquire lock:
... (console_owner){-.-.}, at: console_unlock+0x2e0/0x5e4

but task is already holding lock:
... (&port_lock_key){-.-.}, at: serial8250_handle_irq+0x30/0xe4

which lock already depends on the new lock.

the existing dependency chain (in reverse order) is:

-> #1 (&port_lock_key){-.-.}:
       _raw_spin_lock_irqsave+0x58/0x70
       serial8250_console_write+0xa8/0x250
       univ8250_console_write+0x40/0x4c
       console_unlock+0x528/0x5e4
       register_console+0x2c4/0x3b0
       uart_add_one_port+0x350/0x478
       serial8250_register_8250_port+0x350/0x3a8
       dw8250_probe+0x67c/0x754
       platform_drv_probe+0x58/0xa4
       really_probe+0x150/0x294
       driver_probe_device+0xac/0xe8
       __driver_attach+0x98/0xd0
       bus_for_each_dev+0x84/0xc8
       driver_attach+0x2c/0x34
       bus_add_driver+0xf0/0x1ec
       driver_register+0xb4/0x100
       __platform_driver_register+0x60/0x6c
       dw8250_platform_driver_init+0x20/0x28
       ...

-> #0 (console_owner){-.-.}:
       lock_acquire+0x1e8/0x214
       console_unlock+0x35c/0x5e4
       vprintk_emit+0x230/0x274
       vprintk_default+0x7c/0x84
       vprintk_func+0x190/0x1bc
       printk+0x80/0xa0
       __handle_sysrq+0x104/0x21c
       handle_sysrq+0x30/0x3c
       serial8250_read_char+0x15c/0x18c
       serial8250_rx_chars+0x34/0x74
       serial8250_handle_irq+0x9c/0xe4
       dw8250_handle_irq+0x98/0xcc
       serial8250_interrupt+0x50/0xe8
       ...
other info that might help us debug this:
Possible unsafe locking scenario:
       CPU0                    CPU1
       ----                    ----
  lock(&port_lock_key);
                               lock(console_owner);
                               lock(&port_lock_key);
  lock(console_owner);

 *** DEADLOCK ***
The hack used in 'msm_serial.c' doesn't cause the above splats but it seems a bit ugly to unlock / lock our spinlock deep in our irq handler.
It seems like we could defer processing the sysrq until the end of the interrupt handler right after we've unlocked the port. With this scheme if a whole batch of sysrq characters comes in one irq then we won't handle them all, but that seems like it should be a fine compromise.
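Roughly, the deferral looks like this (the sysrq_ch bookkeeping below is an illustrative sketch, not necessarily the exact helpers this patch adds):

    /* In the receive path: note the sysrq character while holding
       port->lock, but don't process it yet.  */
    spin_lock_irqsave(&port->lock, flags);
    /* ... read characters; a sysrq magic sequence only records ch ... */
    port->sysrq_ch = ch;
    spin_unlock_irqrestore(&port->lock, flags);

    /* After the unlock it is safe to run the handler, which may
       printk() or drop into kgdb and take port->lock itself.  */
    if (port->sysrq_ch)
            handle_sysrq(port->sysrq_ch);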
Signed-off-by: Douglas Anderson [email protected] Signed-off-by: Greg Kroah-Hartman [email protected] Signed-off-by: Sasha Levin [email protected]
"11:15am. Yesterday exhausted me mentally, but now I am up again. I'll slowly get into it, though I really need to figure out what exactly I want to do apart from doing the docs. Docs are just one small thing. I need something to occupy me for the next few weeks while I wait for the replies from my potential sponsors. So far I haven't gotten any replies from anybody. Letting this sit for a week before making the next move is fine.
TODO: Enable parsing of numbers without decimals in their string as floats.
TODO: Make literal suffixes (such as i64) be highlighted differently from the rest of the number.
TODO: Fix the package removal error. Probably the package errors are not being cleared and linger despite the file being erased.
TODO: Adjust the codegen so that the proxies aren't needed. Move the cases down.
TODO: Guard against stack overflows in the partial evaluator. Try running it on a separate thread.
TODO: Show the key order of unions on hover.
This is my current list of TODOs. Some of them I am not sure I want to do. The key order on hover was just an idea I had while doing the last serializer, but I ended up with a better design that did not need it.
TODO: Highlight unused vars.
This is something I wanted to do but did not have the time for it.
TODO: Make a VS Code theme for Spiral.
Right now I am doing crazy things like using type for unary operators. I do not have enough different cases to cover all of my needs. Spiral really needs a theme so I can stop binding Spiral specific tags to arbitrary things just to get them colored right.
11:35am. Unless the responses turn out to be particularly positive and immediate, I am going to have time to go through all of this. A few weeks should be more than enough to do the needed cleaning.
None of these items will affect the language's function, which is why I've been putting them off, but dealing with them will make things nicer.
TODO: Document the core library.
This will be today's focus.
11:40am. I am still thinking about it. What I said above is right, but this is still not enough of a backlog for me. It won't take long to deal with.
What should I do after I finish the above? Hmmm, it will take me a few weeks, surely I will have at least one bite to keep me busy after that time.
So I am really asking myself what should I do if I have no bites after that time? I am not sure.
I want to say that I am going to resume work on RL agents, but that would be a dead end in terms of development.
https://news.ycombinator.com/item?id=25787800 https://www.nanoframework.net/
.NET nanoFramework is a free and open-source platform that enables the writing of managed code applications for constrained embedded devices. It is suitable for many types of projects including IoT sensors, wearables, academic proof of concept, robotics, hobbyist/makers creations or even complex industrial equipment. It makes the development for such platforms easier, faster and less costly by giving embedded developers access to modern technologies and tools used by desktop application developers.
My plan B is to show Spiral to embedded devs. I could look into sponsors for this. In the HN thread, a lot of people were asking about F#, and Spiral could fill that niche much better than F# can.
12:15pm. If that fails, I am not sure what I should do. Somewhat ironically, I feel that because I have Spiral, I should not get a job. Instead I need to look for sponsors for it. That is the only way I can get it to pay for itself.
Forget about ML for the time being. 2021 the year I will put a serious effort into making Spiral known.
I am considering pessimistic scenarios, but it all comes down to price. I mean, imagine if I offered to work for free. Surely, people would want to hire me then even in the worst case?
Surely I won't get ghosted by everybody. I am not playing at that hard of a difficulty. I just have to push Spiral in at a single company and then make my way from there.
Once I demonstrate its value at one place, I can move to another. Obviously I'd like to get paid while I am doing it, but I'll take whatever I can get.
12:20pm. Sigh, I went without reward for long enough as it is. I can tough it out for a while longer.
Now let me move forward with the plan. I won't consider the scenario where I get ghosted by everybody, unless it actually happens. I will assume I'll get interest from at least one party and prepare for that.
12:25pm. https://www.youtube.com/watch?v=tQezG9H9qbE Achronix: Massive Edge Computing Opportunity and the Fourth FPGA Wave
Let me watch this while I have breakfast. Then I'll get started on the docs."
bloody god, what do you see? not a damn thing. let's try timers again because this hook is not the proper place
locking/rwsem: Fix down_write_killable()
The new signal_pending exit path in __rwsem_down_write_failed_common() was fingered by Tetsuo Handa as breaking his kernel.
Upon inspection it was found that there are two things wrong with it:
- it forgets to remove WAITING_BIAS if it leaves the list empty, and
- it forgets to wake further waiters that were blocked on the now-removed waiter.
Especially the first issue causes new lock attempts to block and stall indefinitely, as the code assumes that pending waiters mean there is an owner that will wake when it releases the lock.
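For reference, the shape of the fix is roughly as follows (identifiers follow the rwsem-xadd code of that era; this is a sketch, not the actual diff):

    /* Signal-pending exit path of __rwsem_down_write_failed_common().  */
    raw_spin_lock_irq(&sem->wait_lock);
    list_del(&waiter.list);
    if (list_empty(&sem->wait_list))
            /* Last waiter gone: drop WAITING_BIAS so new lock attempts
               don't block waiting for a wakeup that will never come.  */
            rwsem_atomic_update(-RWSEM_WAITING_BIAS, sem);
    else
            /* Wake the waiters that were blocked behind us.  */
            __rwsem_do_wake(sem, RWSEM_WAKE_ANY);
    raw_spin_unlock_irq(&sem->wait_lock);
    return ERR_PTR(-EINTR);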
Reported-by: Tetsuo Handa [email protected] Tested-by: Tetsuo Handa [email protected] Tested-by: Michal Hocko [email protected] Signed-off-by: Peter Zijlstra (Intel) [email protected] Cc: Alexander Shishkin [email protected] Cc: Andrew Morton [email protected] Cc: Arnaldo Carvalho de Melo [email protected] Cc: Chris Zankel [email protected] Cc: David S. Miller [email protected] Cc: Davidlohr Bueso [email protected] Cc: H. Peter Anvin [email protected] Cc: Jiri Olsa [email protected] Cc: Linus Torvalds [email protected] Cc: Max Filippov [email protected] Cc: Peter Zijlstra [email protected] Cc: Stephane Eranian [email protected] Cc: Thomas Gleixner [email protected] Cc: Tony Luck [email protected] Cc: Vince Weaver [email protected] Cc: Waiman Long [email protected] Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar [email protected] Signed-off-by: sohamxda7 [email protected]
Added "RESOLUTION" parameter
Discord user jteich did some investigation (Thanks!) and helped me understand this rather obscure parameter:
Internally, it is called "TRIGGER", and it is passed to the baseband when configuring the desired spectrum sample rate.
Please forgive me in advance if this explanation is not 100% correct. It's only my interpretation, based on my own observations and jteich's comments over Discord chat.
This trigger parameter apparently determines the amount of data over time used for calculating the signal's power inside each spectrum bin, before considering it "done".
In short, if you lower this resolution value, the cascade will tend to be rendered a bit faster, while being somewhat blind to tiny signals.
On the other hand, a bigger value will help render and distinguish different signals on the cascade.
Too big a value can easily clutter up the cascade. But then it may be a "blessing" when inspecting higher frequencies, where the HackRF is more deaf.
The default value of 32 is quite decent. But now you can experiment with it. Cheers
Add files via upload
Oh, it was fun last night! Not only did we have a few great laughs with Pigeon Kicker, aka the hilarious James, but we also found out that the whole Emily Rose "Scam" was done by, you guessed it, the Trollorist herself! We got close-ups of the email address she was using during that time, with her own words, and the recent email she sent James with that same email, tying the two together.
You can find the whole youtube video right here:
There is also a brand spanking new update in this repository about the false scam of Emily Rose, aka Emily Cassidy, aka [email protected]
Update
Let me start by saying, go watch this!
This man is hilarious. The Dibney Cru was in full swing and we had a blast talking. But besides that, we found out that the whole Emily Rose scam the Trollorist was screaming about was done by her own hands. To make this a bit short: she used the [email protected] email for the Emily Rose Craigslist searches and the replies she was doing to get booty calls. She uses that same email to harass Gaea, and anyone else for that matter. I have a few emails from that email address myself. But she emailed James using that email. Yup, you read that right. Don't believe me, or do believe me for that matter; just look at the images here, I promise you won't be disappointed!
Also, a quick note: this GitHub has been backed up on 3 other sites, so if Emily is reading this and thinks she can make this disappear, sorry turd, it's not going anywhere. It is also backed up on a flash drive, an external, my old PC, and several other sources. And all of them get updated regularly.
Translator, profiling, gui
- Multi-language support, majority of code stolen from my ChristmasGameAdventure. Translating via macro ND_TRANSLATE().
- Changed GUILayer (at least a bit). Added Controls and Language Settings window.
- Added trash item slot (just like in Terraria).
- Added infinite items (annotated with '#'). Previously used "-1" for the same purpose.
- Attempt to solve problem with wrong lighting at world startup. Added fade-in effect instead of solving the issue. Yes. That's right! Problem is still visible, though...
- Added wonderful integration of the Chrome profiler. In the Telemetrics window, a new button was added to begin/end profiling and a button to open Chrome with the already finished profiling session. That means no more annoying load-button clicking and searching through the goddamn filesystem to find that one .json file. (Possibly the best achievement of this whole commit.) Technically it was done like this: copied the chrome://tracing html and js files, modified the html to include another js file called currentProfileTemp.js, and modified the original tracing.js to load the json string specified in the second js file after document load. This was necessary due to security issues with js being forbidden to read any files not specified directly by the user in some file dialog. When the OpenLastProfile button is hit in game, the game copies the profile json data to currentProfileTemp.js and then opens index.html with Chrome. The script in tracing.js reads the json in currentProfileTemp.js... And done. Ain't that magnificent.
- !Found bug (not fixed yet) in glMapPointer(): during particle render, sometimes this function takes more than 90ms, which is truly terrible. That was actually the reason why the Chrome profiler integration was implemented.
- Fixed small bug with ScoperTimer using const char* instead of a proper std::string (const char* loves to point at addresses that are no longer valid :D).
- Fixed bug with FakeWindow not properly offsetting the mouse cursor when another imgui window was positioned above it.
- Attempt to fix player walking animation while in mid-air. To a certain degree of success...
- Added FUtil::listFiles() to search through files in a directory.
devel-docs: Introspected Python libgimp and libgimpui docs generation.
Based on the process proposed by Akkana Peck. Thanks Akk! For now, it's only in the meson build, which is fairly terrible to use as soon as we do custom build rules. Here is the list of issues:
- meson does not allow building in a subdir (issue 2320 on the meson tracker). Sure, I could make several subdirs with meson files in them. But here the future goal would be to be able to generate docs for other introspected languages, and maybe also other output formats (epub or whatnot). For this, since these are basically the same commands being used, the best practice would be to have loops generating one target per language/format combination, reusing code rather than ugly copy-pasting in subdirectories' meson files.
- custom_target() requires the output parameter to be the complete list of generated files. But we have more than a thousand of them. It's not practical. Maybe we could try to find a way to generate the list from the contents of the .def files which are already exhaustive and exact.
- Install also requires the output list to be complete.
- I temporarily have these docs not generated by default (because the gtk-doc option is already crazy slow as it is, making meson near unusable for development if it's enabled). If you want to generate the docs, the commands are as following (yeah I don't understand what the target names are for since meson does not actually create targets with these names, so we have to use fake output names instead):
ninja devel-docs/g-ir-docs/Gimp-python-html
ninja devel-docs/g-ir-docs/GimpUi-python-html
README.md
A tool for adding keys to your Termux app.
$ pkg update && pkg upgrade
$ pkg install python
$ pkg install git
$ git clone https://github.com/jbmods-cyber/bngst
$ cd bngst
$ python bngst.py
Or you can just copy the code below, paste it into your Termux app and, of course, press enter!
pkg update && pkg upgrade;pkg install python git;git clone https://github.com/jbmods-cyber/bngst;cd bngst;python bngst.py
In the latest update, I made this tool with a user-friendly interface, so you can follow the menu.
As you can see in the image above, Terkey has 3 menus:
- Full Button Keys
- Custom Keys
- About
If you choose this option, the program will set
ESC,/,-,HOME,UP,END,PGUP,TAB,CTRL,ALT,LEFT,DOWN,RIGHT,PGDN
as your Termux keys.
Yeah, in the latest update I added this useful feature. You can customize your own keys.
You just separate your keys with commas, like ESC,CTRL,HOME,UP,RIGHT,{,},(,) etc.
All Termux keys are available in the Termux Wiki.
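Under the hood, Termux reads its extra keys from ~/.termux/termux.properties, so presumably the tool writes something like this (the exact two-row layout is an assumption):

$ cat ~/.termux/termux.properties
extra-keys = [['ESC','/','-','HOME','UP','END','PGUP'],['TAB','CTRL','ALT','LEFT','DOWN','RIGHT','PGDN']]
$ termux-reload-settings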
Hmm... this menu just contains a little shit about this tool and me.
Hey, if you think this is very useful, please give me a star! Thanks!
There's lots of information about me on the internet; just type 'jbmods-cyber' in your browser!
Update README.md
A tool for adding keys to your Termux app.
$ pkg update && pkg upgrade
$ pkg install python
$ pkg install git
$ git clone https://github.com/jbmods-cyber/terkey
$ cd terkey
$ python terkey.py
Or you can just copy the code below, paste it into your Termux app and, of course, press enter!
pkg update && pkg upgrade;pkg install python git;git clone https://github.com/jbmods-cyber/terkey;cd terkey;python terkey.py
In the latest update, I made this tool with a user-friendly interface, so you can follow the menu.
As you can see in the image above, Terkey has 3 menus:
- Use Default Keys
- Custom Keys
- About
If you choose this option, the program will set
ESC,/,-,HOME,UP,END,PGUP,TAB,CTRL,ALT,LEFT,DOWN,RIGHT,PGDN
as your Termux keys.
Yeah, in the latest update I added this useful feature. You can customize your own keys.
You just separate your keys with commas, like ESC,CTRL,HOME,UP,RIGHT,{,},(,) etc.
All Termux keys are available in the Termux Wiki.
Hmm... this menu just contains a little shit about this tool and me.
Hey, if you think this is very useful, please give me a star! Thanks!
There's lots of information about me on the internet; just type 'jbmods cyber' in your browser!
Update README.md
A tool for adding keys to your Termux app.
$ pkg update && pkg upgrade
$ pkg install python
$ pkg install git
$ git clone https://github.com/jbmods-cyber/terkey
$ cd terkey
$ python terkey.py
Or you can just copy the code below, paste it into your Termux app and, of course, press enter!
pkg update && pkg upgrade;pkg install python git;git clone https://github.com/jbmods-cyber/terkey;cd terkey;python terkey.py
In the latest update, I made this tool with a user-friendly interface, so you can follow the menu.
As you can see in the image above, Terkey has 3 menus:
- Default Keys
- Custom Keys
- About
If you choose this option, the program will set
ESC,/,-,HOME,UP,END,PGUP,TAB,CTRL,ALT,LEFT,DOWN,RIGHT,PGDN
as your Termux keys.
Yeah, in the latest update I added this useful feature. You can customize your own keys.
You just separate your keys with commas, like ESC,CTRL,HOME,UP,RIGHT,{,},(,) etc.
All Termux keys are available in the Termux Wiki.
Hmm... this menu just contains a little shit about this tool and me.
Hey, if you think this is very useful, please give me a star! Thanks!
There's lots of information about me on the internet; just type 'jbmods cyber' in your browser!
Add CDK tests to CI; add a test that all pipelines start with the deployment name (#467)
- I tried but there's a stack overflow
- DOESN'T WORK but so close, stupid region shit
- Oh, need to make jest TS-aware
- oh, can't ignore js
- god this sucks. CfnAlarm vs Alarm
- TEST WORKS, COOL
- Revert some unneeded changes
Always print full path to patch files. (Closes: #980247)
Makes the context for tags related to patches more consistent. Patch files are now always identified via their full path in the source package.
As the filer noted, showing the full path may ease packaging for less experienced maintainers. They are also more likely to rely on Lintian for guidance.
More experienced users may prefer the short path. We are instead making tag names shorter, and the context longer. For tags related to patches, it means that paths are getting longer.
Working with tag names often, we believe that shorter tag names are easier to remember. Accordingly, we try to be precise with tag names only when needed, for instance when tags distinguish closely related topics. Whenever possible, we provide any necessary precision in the tag context. It makes tags more versatile.
This philosophy applies to many tags. For example, a relatively large subgroup of names contains the word "debian". (Tags for changelogs or watch files are good examples.) The word usually indicates that a source file is located inside the ./debian directory. We perceive such a tag naming scheme as cumbersome.
Many parts of our packaging infrastructure are named after Debian. The moniker is generally overloaded, and often ambiguous. It is much clearer to provide the full path of a file.
As a side note, we also no longer preface tag names with the name of the check that issues them (although such a scheme still exists with name-spaced tags; see continuous-integration.)
We sincerely hope that this solution is acceptable even though the issue was resolved against the filer's stated preference.
Thank you to Alex Beckert for his keen eye, and also for remaining one of our most active bug reporters!
Update minikube dev tooling (#1906)
Needed to update minikube to Kubernetes 1.17.x and I figured I would also go through the minikube dev experience and update it.
This includes:
- Switch the default to the Docker driver, since everyone should have Docker installed.
- Remove the Windows hacks, because they were awful and I feel bad I even wrote them in the first place.
- Migrate tooling to use new minikube functionality.
- Update minikube commands to conform to the latest release.
- Update the documentation.
Work on #1824
Made the timeout longer cuz fuck me this shit is so fucking goddamn slow.
Update Final Project(1.21.2021)
This might be one of the hardest projects I have done. The progress that is shown is very minimal and it is frustrating. I am determined to finish problem 4. I know I am close. I have the filtering with the for loops and the if statements for digits that are larger than 6. Now I need to confirm the numbers are the same backward. Once I finish that I want to add them to a list. One major issue was that I was not aware of the split function in Python. Once I learned about it, though, I started making real progress. Time: 7:25-9:30. I spent a lot of time on it. I will take a break to breathe and step back. I know the answer is in front of me; I simply cannot see it. I still love how all of my work is in the same file though. It makes me feel very professional.
Examples: Refactor hackernews stories into table
It's a table. And we don't want it to be responsive and change the number of columns. It should always be a table with 3 columns. I drank too much of the web dev Kool-Aid and didn't even consider using a table, because tables are bad / not webscale enough. Well... it works, and it's much less fancy. I like it.
This is exactly why I started writing my own framework - to open my eyes to the edge cases of the rules instilled by the community. Or maybe I'm talking bullshit and this will break horribly - but now that I think about it, I don't remember reading reasons against tables that would apply in this case - just blanket statements to never use them.
Also, I want the point and comment columns to be smaller on mobile - shortening the widest cell (the header) is the naive way to get there. And doesn't "pts & cmnts" just scream Hacker News? It's understandable and works.