4,086,969 events, 1,574,400 push events, 2,458,117 commit messages, 186,979,397 characters
Add files via upload
How does COVID-19 impact adults' mental health?
The Implications of COVID-19 for Mental Health and Substance Use by Nirmita Panchal, Rabah Kamal, Cynthia Cox, and Rachel Garfield. (https://www.kff.org/coronavirus-covid-19/issue-brief/the-implications-of-covid-19-for-mental-health-and-substance-use/)
Men, jobless and people with mental health diagnoses most vulnerable in 2020 overdose spike by Corrie Pikul. (https://www.brown.edu/news/2021-09-17/overdose)
Background information: The pandemic has brought people not only physical suffering but also internal trauma, with many people suffering from anxiety and depression in a closed environment. Key findings: Depression is most common among 18- to 24-year-olds, many of whom turn to alcohol and drugs for relief. Implications: (Here are the words from experts.) History has shown that the mental health impact of disasters outlasts the physical impact, suggesting today's elevated mental health need will continue well beyond the coronavirus outbreak itself. For example, an analysis of the psychological toll on health care providers during outbreaks found that psychological distress can last up to three years after an outbreak. Due to the financial crisis accompanying the pandemic, there are also significant implications for mortality due to "deaths of despair." A May 2020 analysis projects that, based on the economic downturn and social isolation, additional deaths due to suicide and alcohol or drug misuse may occur by 2029.
[MIRROR] 1984: Suppressed Bans (#4190)
-
1984: Suppressed Bans (#5251)
-
Suppressor Permission Critical fix to ban notes
-
Confirmation alert
-
Allow Temp Bans Relevant terminology changes
-
I totally did not not do this at 3am
-
FUCK I FORGOT TO UPDATE THE UI TGUI SUCKS BUT HOLY SHIT THIS STUFF IS /AWFUL/
-
irc>tgs
-
1984: Suppressed Bans
Co-authored-by: Gallyus [email protected]
Add computer scientist job
r6223 2021-09-28
- Add entries to messages.properties
- ScienceType.computing entry
- SkillType.computing
- job.male.ComputerScientist = Computer Scientist
- job.female.ComputerScientist = Computer Scientist
- Add ComputerScientist class.
- Add in various buildings.
- Add computing_topics.json.
lib/sort: make swap functions more generic
Patch series "lib/sort & lib/list_sort: faster and smaller", v2.
Because CONFIG_RETPOLINE has made indirect calls much more expensive, I thought I'd try to reduce the number made by the library sort functions.
The first three patches apply to lib/sort.c.
Patch #1 is a simple optimization. The built-in swap has special cases for aligned 4- and 8-byte objects. But those are almost never used; most calls to sort() work on larger structures, which fall back to the byte-at-a-time loop. This generalizes them to aligned multiples of 4 and 8 bytes. (If nothing else, it saves an awful lot of energy by not thrashing the store buffers as much.)
Patch #2 grabs a juicy piece of low-hanging fruit. I agree that nice simple solid heapsort is preferable to more complex algorithms (sorry, Andrey), but it's possible to implement heapsort with far fewer comparisons (50% asymptotically, 25-40% reduction for realistic sizes) than the way it's been done up to now. And with some care, the code ends up smaller, as well. This is the "big win" patch.
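The comparison saving described above comes from the bottom-up sift technique (Wegener's variant of heapsort): descend all the way to a leaf following the larger child (one comparison per level instead of two), then climb back up to where the displaced root element belongs. A minimal Python sketch of the idea, not the kernel's C implementation:

```python
def sift_down(a, root, end):
    """Bottom-up (Wegener-style) sift for the max-heap in a[root:end]."""
    # Phase 1: walk down to a leaf, following the larger child at each
    # level -- one comparison per level instead of two.
    j = root
    while 2 * j + 2 < end:
        j = 2 * j + 2 if a[2 * j + 2] >= a[2 * j + 1] else 2 * j + 1
    if 2 * j + 1 < end:        # node with only a left child
        j = 2 * j + 1
    # Phase 2: climb back up to the first ancestor no smaller than the
    # displaced root element x (usually only a step or two, on average).
    x = a[root]
    while a[j] < x:
        j = (j - 1) // 2
    # Phase 3: rotate the values on the root..j path up one level,
    # dropping the old root value x into slot j.
    while j > root:
        a[j], x = x, a[j]
        j = (j - 1) // 2
    a[root] = x

def heapsort(a):
    """In-place heapsort built on the cheap sift; returns a for convenience."""
    n = len(a)
    for i in range(n // 2 - 1, -1, -1):   # heapify, bottom up
        sift_down(a, i, n)
    for end in range(n - 1, 0, -1):       # repeatedly extract the max
        a[0], a[end] = a[end], a[0]
        sift_down(a, 0, end)
    return a
```

The win comes from phase 1: the element sifted down after an extraction is almost always small, so it nearly always sinks close to a leaf, and the two-comparisons-per-level test of the naive sift is wasted work.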
Patch #3 adds the same sort of indirect call bypass that has been added to the net code of late. The great majority of the callers use the builtin swap functions, so replace the indirect call to sort_func with a (highly predictable) series of if() statements. Rather surprisingly, this decreased code size, as the swap functions were inlined and their prologue & epilogue code eliminated.
lib/list_sort.c is a bit trickier, as merge sort is already close to optimal, and we don't want to introduce triumphs of theory over practicality like the Ford-Johnson merge-insertion sort.
Patch #4, without changing the algorithm, chops 32% off the code size and removes the part[MAX_LIST_LENGTH+1] pointer array (and the corresponding upper limit on efficiently sortable input size).
Patch #5 improves the algorithm. The previous code is already optimal for power-of-two (or slightly smaller) size inputs, but when the input size is just over a power of 2, there's a very unbalanced final merge.
There are, in the literature, several algorithms which solve this, but they all depend on the "breadth-first" merge order which was replaced by commit 835cc0c8477f with a more cache-friendly "depth-first" order. Some hard thinking came up with a depth-first algorithm which defers merges as little as possible while avoiding bad merges. This saves 0.2*n compares, averaged over all sizes.
The code size increase is minimal (64 bytes on x86-64, reducing the net savings to 26%), but the comments expanded significantly to document the clever algorithm.
TESTING NOTES: I have some ugly user-space benchmarking code which I used for testing before moving this code into the kernel. Shout if you want a copy.
I'm running this code right now, with CONFIG_TEST_SORT and CONFIG_TEST_LIST_SORT, but I confess I haven't rebooted since the last round of minor edits to quell checkpatch. I figure there will be at least one round of comments and final testing.
This patch (of 5):
Rather than having special-case swap functions for 4- and 8-byte objects, special-case aligned multiples of 4 or 8 bytes. This speeds up most users of sort() by avoiding fallback to the byte copy loop.
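The alignment idea can be sketched in Python over a bytearray (a hypothetical helper for illustration, not the kernel code): swap in word-sized chunks whenever the object size and both offsets are aligned multiples of the word size, and fall back to byte-at-a-time only when they are not.

```python
def generic_swap(buf, i, j, size, wordsize=8):
    """Swap two size-byte regions of buf starting at offsets i and j.

    Mirrors the patch's idea: instead of special-casing only objects
    that are exactly 4 or 8 bytes, copy a word at a time for any
    aligned multiple of the word size, and fall back to single bytes
    only when alignment rules it out.
    """
    aligned = size % wordsize == 0 and i % wordsize == 0 and j % wordsize == 0
    step = wordsize if aligned else 1
    for off in range(0, size, step):
        a, b = i + off, j + off
        tmp = bytes(buf[a:a + step])
        buf[a:a + step] = buf[b:b + step]
        buf[b:b + step] = tmp
```

A 40-byte struct at aligned offsets is then swapped in five 8-byte moves instead of forty 1-byte moves, which is the case the patch says dominates real sort() callers.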
Despite what ca96ab859ab4 ("lib/sort: Add 64 bit swap function") claims, very few users of sort() sort pointers (or pointer-sized objects); most sort structures containing at least two words. (E.g. drivers/acpi/fan.c:acpi_fan_get_fps() sorts an array of 40-byte struct acpi_fan_fps.)
The functions also got renamed to reflect the fact that they support multiple words. In the great tradition of bikeshedding, the names were by far the most contentious issue during review of this patch series.
x86-64 code size 872 -> 886 bytes (+14)
With feedback from Andy Shevchenko, Rasmus Villemoes and Geert Uytterhoeven.
Link: http://lkml.kernel.org/r/f24f932df3a7fa1973c1084154f1cea596bcf341.1552704200.git.lkml@sdf.org
Signed-off-by: George Spelvin [email protected]
Acked-by: Andrey Abramov [email protected]
Acked-by: Rasmus Villemoes [email protected]
Reviewed-by: Andy Shevchenko [email protected]
Cc: Rasmus Villemoes [email protected]
Cc: Geert Uytterhoeven [email protected]
Cc: Daniel Wagner [email protected]
Cc: Don Mullis [email protected]
Cc: Dave Chinner [email protected]
Signed-off-by: Andrew Morton [email protected]
Signed-off-by: Linus Torvalds [email protected]
Signed-off-by: Yousef Algadri [email protected]
Signed-off-by: Panchajanya1999 [email protected]
Signed-off-by: Forenche [email protected]
Update README.md
The day after tomorrow is the last fucking working day before my lovely national day holiday
Update README.md
Abstract - The Retail Analytics space is evolving towards providing a better customer experience every day. Image-based product search is one such area of work. Similarly, this project explores the idea in the online eye frames product space. The project compares image features and selects the top 10 images most similar to a given eye frame image.
INTRODUCTION (MOTIVATION)
There are many times when we find it difficult to recall the exact name of a product, cannot describe it clearly for search, or would love to reorder what we saw someone else wearing. In all these cases and many more, "visual product search" is what comes to the rescue. Instead of describing the product, we could just take a picture of it and search for it. This work is about developing a model which would find the images most similar to a particular product image. An article on mycustomer.com explains how Google has revolutionized the way we search. Consumers now demand image-based search functionality when shopping online, not just text-based search. In a survey of 1,000 consumers, three quarters (74%) said traditional text-based keyword queries are inefficient in helping them find the right items online. Another 40% would like their online shopping experience to be more visual, image-based and intuitive. Potential customers are window shopping "by image" on search engines like Google. Retailers who search-engine-optimize their product images and listings can gain a competitive advantage with consumers who prefer to shop this way.
RELATED WORK AND ARTICLES
Below are the related research papers and articles for reference about the project idea:
A. S. Umer, Partha Pratim Mohanta, Ranjeet Kumar Rout, Hari Mohan Pandey: Machine learning method for cosmetic product recognition: a visual searching approach. A cosmetic product recognition system is proposed in this paper.
For this recognition system, a cosmetic product database has been processed that contains image samples of forty different cosmetic items. The purpose of this recognition system is to recognize cosmetic products with their types, brands and retailers, so as to analyze what kinds of products and brands customers need. This system has various applications, such as brand recognition, product recognition and checking the availability of products to vendors. The implementation of the proposed system is divided into three components: preprocessing, feature extraction and classification. During preprocessing the color images were scaled and transformed into gray-scale images to speed up the process. During feature extraction, several different feature representation schemes (transformed, structural and statistical texture analysis approaches) have been employed and investigated using both global and local feature representations. Various supervised machine learning classification methods such as Logistic Regression, Linear Support Vector Machine, Adaptive k-Nearest Neighbor, Artificial Neural Network and Decision Tree classifiers have been employed to perform the classification tasks.
B. Image Classification for E-Commerce - Part I: https://towardsdatascience.com/product-image-classification-with-deep-learning-part-i-5bc4e8dccf41. This article explains how images are used to solve one of the most popular business problems, i.e. classification of products. A giant online marketplace like Indiamart has thousands of macro categories for listing various products. A product must get mapped under the most appropriate micro category on the platform. The goal of the post is to build intuition and understanding of how neural networks can be trained to identify the micro category of a product using its images.
C.
Building a Reverse Image Search Engine: Understanding Embeddings - https://www.oreilly.com/library/view/practical-deep-learning/9781492034858/ch04.html. This page describes the process of building a reverse image search engine: performing feature extraction and similarity search on the Caltech101 and Caltech256 datasets, learning how to scale to large datasets (up to billions of images), making the system more accurate and optimized, and analyzing case studies to see how these concepts are used in mainstream products. The page has information on locating similar images with the help of embeddings, and goes a level further by exploring how to scale searches from a few thousand to a few billion documents with the help of ANN algorithms and libraries including Annoy, NGT, and Faiss. It also covers how fine-tuning the model to your dataset can improve the accuracy and representative power of embeddings in a supervised setting. To top it all off, it shows how to use Siamese networks, which use the power of embeddings to do one-shot learning, such as for face verification systems.
DATASET DESCRIPTION AND PREPROCESSING
The dataset for the problem is a collection of 5570 eye frames. The file contains product names, product ids, frame shapes, parent categories and URLs of the eye frame images. The parent category feature has 3 classes (Eye frame, Sunglasses and Non-Power Reading) while the frame shape feature has 4 classes (Rectangle, Wayfarer, Aviator and Oval). The data was preprocessed so that it could be fed to the model for training. A dictionary was prepared which contained the array values of the images from the URLs, keyed by product id. This image data was fed to a pretrained VGG16 for feature generation. The features generated were appended to the main data frame against their respective product ids. Each image is resized to (224,224,3) and the size of the features generated is (7,7,512).
METHODOLOGY
After the features from VGG16 are generated and stored for the 5570 images in the dataset, a new data frame is prepared which contains product ids, parent category and frame shape. An artificial neural network (ANN) is trained on the dataset through the functional API, with the features as inputs and two outputs predicting frame shape and frame category. A new image uploaded by the user is first processed through VGG16 for feature generation; the features generated then become the input to the trained ANN for frame shape and category prediction. After the frame shape and category of the submitted image are predicted, the feature array of the image is compared to the features of the subset of frames of the predicted shape and category from the main feature data frame. This approach speeds up the similarity score calculation, since only a subset is considered rather than the complete dataset. Pairwise cosine similarity is used as the metric for calculating the similarity of features. The similarity scores thus generated are sorted in descending order and the URLs for the top 10 similarity scores are selected for output.
RESULTS
The ANN uses the Adam optimizer (learning rate = 0.01) and sparse categorical cross-entropy loss with accuracy as the evaluation metric, and was trained for 250 epochs with a batch size of 128 and a validation split of 20%. After training, the model achieved 97.2% accuracy and 0.0845 loss for category prediction, and 90.4% accuracy and 0.2700 loss for shape prediction. Below are the plots for training accuracy and training loss. The evaluation of the final results, i.e. whether the recommended eye frames displayed are similar to the uploaded eye frame image, depends on user evaluation. After this, a Flask API was developed to improve the user interface for model testing.
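The ranking step described above can be sketched as follows: a minimal NumPy version, assuming the (7,7,512) feature maps are flattened to vectors. The function name and arguments are illustrative, not the project's actual code.

```python
import numpy as np

def top_k_similar(query_feat, candidate_feats, candidate_ids, k=10):
    """Rank candidate feature vectors by cosine similarity to a query.

    query_feat:      feature array for the uploaded image (any shape).
    candidate_feats: stacked feature arrays for the pre-filtered subset
                     (same shape/category as predicted by the ANN).
    candidate_ids:   product ids aligned with candidate_feats rows.
    Returns the k (id, score) pairs with the highest cosine similarity.
    """
    q = query_feat.ravel().astype(np.float64)
    C = candidate_feats.reshape(len(candidate_feats), -1).astype(np.float64)
    # Cosine similarity is the dot product of L2-normalized vectors.
    q /= np.linalg.norm(q)
    C /= np.linalg.norm(C, axis=1, keepdims=True)
    scores = C @ q
    order = np.argsort(scores)[::-1][:k]           # descending order
    return [(candidate_ids[i], float(scores[i])) for i in order]
```

Because the ANN's shape/category prediction pre-filters the candidates, `candidate_feats` holds only the matching subset, which is what keeps the pairwise comparison cheap.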
Added some resiliency to the global error handler so it doesn't cause an error if the error object it's attempting to handle doesn't have a message.
users.selector and its invoking services, auth guards, validators, etc. - selectUserByExactDisplayName now returns only the user object associated with that account.
create-account component - trying to troubleshoot the displayName uniqueness validator and/or form control for displayName. As it stands, the display-name-taken error stays in the markup whether or not the error is currently in effect. For example, you provide a display name that is taken and get the error, then delete a chunk of it, meaning there is no longer a match, but the error stays there regardless.
navbar - little progress here; we took the console log we were finally getting last night and used it as a springboard for the changes made today. Ideally we will finish what we're doing in the auth service, then come back to this and try using a different selector such as selectById or selectByDisplayNameExactMatch.
user.model - ties into the auth service refactoring; we got rid of the metadata property, which was unnecessary / a quick, cheap hack.
auth service - doing quite a bit of refactoring here, which we are in the process of working through. In addition to adding documentation for all the methods, we reduced the complexity of the googleLogin method and got some ideas for further improvement of all the sign-in and account-creation related methods. We will finish this refactoring by giving these methods return values to tell their callers how to handle the results of these processes, perhaps/probably using promises.
Generally, trying to improve the quality of the code in the auth service instead of putting that off and continually having to put out fires in the component logic that depends on this service.
"7:50am. I am set on my course. If Z did not just go and die, once he gets back I'll just convince him that I am best fit even if he rejects me. This situation is different from applying at corps where HR roasties randomly toss resumes in the trash. I have nothing to be afraid of. I just need to manifest my determination. The only thing good about having 4 weeks to think about it is that I've been able to realize just how much I want to do this particular work on PPLs.
Of course, I need to have a backup plan since nothing is certain, so I'll also make the solo path which is the new Simulacrum arc.
Now, let me check out the mail. 3 rejects from some random places. That makes 4 so far. What a waste of time.
8am. Let me chill a bit and then I will start.
8:10am. Let me start.
$$$
Scene 2 - Room Intro - The camera scene
The progress bar filled up and the program informed me that the uplink has been installed. Then it instructed me to try the Uplink and the Camera. Closing the Converter, I open the Uplink and I am startled to see the desktop overlaid in front of my vision as if it was burned into my retina. I close my eyes and notice that instead of the usual darkness I can now make out every vivid detail of it.
Opening my eyes I look around the room noting that despite my gaze having the desktop seemingly imprinted on it, I can still make out the objects in my actual vision without issue.
I try walking around.
"This is wild."
Seating myself back, I reach for the mouse out of habit and remember that I should be able to access the desktop directly through the Uplink. I try willing it and I see the cursor move. I look at the top of the monitor and take note of the webcam attached to it. I've barely used it since I got it, but it will allow me to have an interesting experience.
The Camera that the core comes with has a feature: it can route the visual feed through the uplink. I open the app, see the camera come online on the monitor, and after some deliberation to mentally ready myself, hover the mouse over the 'Route to uplink' switch and click.
I am greeted by the surreal scene of looking at myself from the camera. It is impossible to describe the feeling of suddenly being able to see through a third eye using the existing human lexicon, but that is what is happening. It feels like a miracle.
Closing my own two eyes, the camera feed still stays vividly imprinted as a sense, and as I shift and move in my seat, I note that the changes are reflected in the feed just as expected. Reaching for the top of the screen, I unhook the camera and play around with it for a while by pointing it at various things.
After 10m or so the wonder becomes stale and with a mental command, I turn off the camera and put it back in its original place at the top of the monitor.
Leaning back in my seat, my heart bubbles with emotion as I ponder the possibilities. This experience only served to reinforce even further my belief that technology is a joy ride. I have forgotten when I got my first computer, but I remember the sensation of playing games for the first time. As time passed, the experience of living in such a technological era gave me a certain indeterminate belief that I could achieve great things if I could somehow bring the power I put into the games outside. There was a dichotomy of games being their own little realities, and yet life outside passing on with only superficial changes, as if computers were the new brand of clothes one wears. Noticing this made me desire to see the bubble of normality break.
There is a deep sense of unfairness in playing games and winning at them with no one on the outside taking notice, while people running around kicking balls are somehow deserving of respect, all for playing games mere children could. The previous generation of hardware was just too weak, but having an excess of 10^30 in computational capacity compared to regular brains should give me the power to achieve something. I'd really be a moron not to, given my programming abilities. I just need to play games in a better way than ordinary people can. I'll find the right beliefs and the right things to think, and that should be enough to win at life.
Maybe 'it' would have been impossible before, but 'it' should be possible in this era.
What 'it' is I am not sure yet.
$$$
Let me resume from where I left off 2 days ago.
8:25am. Two days from now, I'll do the PL review.
9:25am. Let me take a short break.
9:55am. Let me resume. It is time for the next scene.
$$$
Scene 2 - Room Intro - Diving into Simulacrum
With the computation graph of my own mind reflected beyond the darkness of my eyes as I lie on my bed, I start Simulacrum for the first time. The load does not even last an instant and I sink into the virtual, feeling outside reality being pushed out from my being. The sensation is quite relaxing, like falling into slumber.
As I wait for something to happen, the familiar poem that I would see many times in the future plays out:
In the endless darkness
Roam endless monsters
Pain, cold, flame
Age, time, death
Torment
Light and shadow
Holy and hell
The inevitable fate of the Universe
Will never touch
The courage of the Inspired
And the power of the Transcendi
The scene manifested before me. Panning to the sky, highlighted in brilliant hues of light like a veil of gold covering it, was a floating city. Against a backdrop of blue, it seemed like a distant object made of gold. I felt the sense of depth in my vision, so I could sense that the floating city was massive. Then I felt the warmth of the sun and the rustle of wind. I get the sense that I am high up, and will my head to turn down. I notice the bustle and the humdrum of a modern metropolitan city. Skyscrapers dot the landscape and deep down below I can see cars and pedestrians walking the streets.
The vision seems very crisp and vivid. After a few moments of pondering, I realize why that is the case. In the real world I have to wear glasses, so everything is blurry past a certain distance, but here I am unrestricted in my resolution. Just by focusing I can see for thousands of meters as if I were standing a foot away. Picking a random spot on the street, I can make out the minor cracks in the pavement, and the grime and the wear from people walking over it for many years. Dressed in autumn clothing, the people feel lifelike and real as they go about their lives. I see them talking, exclaiming, laughing, being tired, downcast and otherwise broadcasting their emotions.
"..."
I found the whole scene very impressive. It did not seem like something that humans could create.
As soon as I start wondering how to start the game, the menu comes up in a separate sense that feels like my vision. Load Game is greyed out, but Start Game and Quit are there, so I select the former and enter the Scenario Selector. Simulacrum itself is more like an engine for running different scenarios, of which there are hundreds. Browsing over the available scenarios, all of them seem interesting and I am having trouble deciding, so I go back to the top and look at the recommended one.
Heaven's Key.
'The souls of those passed on enter the afterlife to play the grand game of gambling God has prepared for them. The second chance is given, and can be lost. Can you survive the challenges of your second life and go on to challenge God himself?'
Along with that brief description, came some setting information. When I focused on the map icon to the side it unfolded to a 3d model in my mind. I realized that the world of Heaven's Key was covered with orbital islands, much like the floating one in the intro. The tech level is similar to the current time in 2025.
The description makes it sound like there will be some gambling action. I am not into online poker, though I played it for a bit. I am more into games I can explore. Still... imagine I played something like Hell. No way can I fight demons and study my own mental program at the same time. Online poker is fast and furious, but sitting at a real table will give me time to study both myself and others.
So I guess this scenario is fine. Having made my decision, I started the game and felt myself being drawn into it.
$$$
Hmmm, there is a VAE-GAN? I'd never heard of it before. The results are not impressive at all.
Only this even with a full day of training? Yeah, I am in trouble. It will be the same situation as with poker all over again. I'll have to get creative if I want to get anywhere with ML. I will not try it the same way as I have before. I've learned how terrifying computational limits can be. I'll leave figuring out the metaheuristic algorithm to the researchers, while I build in my biases.
10:25am. Let me get the old poem.
...I'd forgotten what it is, and got a feeling as I read it again. It is really a good one.
10:40am. I think anybody who ever reads it will have the 'courage of the Inspired' and the 'power of the Transcendi' etched into their mind. I myself forgot the rest after all this time, but I still remember the last part.
It feels like I am meeting an old friend. It is a pity, I am much more weathered and experienced now, and I do not come bearing gifts, but rather to ask for a loan. This time I'll not write the story to declare victory to the world after finding a great secret, but to let my sense for evil guide me to the solution out of my present predicament.
10:45am. I have some ideas; though they are not earthshaking, the normies will squirm in their seats.
https://youtu.be/KwRrhQ77GZc VAE-GAN | Tech Bytes
Let me take a short break to watch this 4m video.
https://experiments.withgoogle.com/search?q=generative%20adversarial%20network
Hmmm, this paper is pretty popular. But it is not like I've been paying much attention to this area.
11am. I am out of focus. Let me do the next scene. The big shining city in the sky.
11:35am. Had to take a break. Let me continue writing.
1pm. Let me get breakfast.
2:30pm. Let me catch up with Hero King and I will resume my own writing.
2:50pm. Focus me, I need to continue.
$$$
Scene 3 - Heaven's Key - Intro
The method of recursive self improvement via iterated suicide can be used in reality. In fact, the technique represents the most viable path to getting superpowers in any kind of reality.
- Loading Blurb
Whereas during the menu segment I had been a disembodied presence hovering thousands of feet above the cityscape, once the game started I found myself standing upright in an actual body. I gawked like a tourist, taking in the sights.
"This is amazing." I concluded out loud, and looked down over my body, noting that it is much the same as in the real world. The game even recreated the clothes I usually wear when out. I am pretty comfy right now.
Looking behind me, I notice the unfolding landscape. Right now I seem to be very high up. I step forward and look over the railing, confirming my suspicion. I am at the very edge of one of those floating cities that I saw from below in the intro. I try very hard not to imagine how it would be to fall from here. Reflexively, my hands tighten on the railing.
I...
Option { * Throw myself over for fun. * Turn away from the railing. }
$$$
Every time the player logs in there should be an intro blurb, like there had been at the top of old Simulacrum chapters. As much as I want to stick 'Necromutation is Transcendence' in there, I need to think of something suitably evil.
'The method of recursive self improvement via iterated suicide can be used in reality. In fact, the technique represents the most viable path to getting superpowers in any kind of reality.'
I'll have to write a bunch of these redpills. Is it too soon to start bringing this particular redpill in?
3:25pm. I'll go with it. I can always change my plan later.
3:50pm. It is fun to think of various scenarios. Let me do the throw over the railing scene.
$$$
Scene 3 - Heaven's Key - Go Over The Rail
I really shouldn't, but the idea is just too fun not to try out. My instinctive reaction would be to back away from certain doom, but I remembered that everything here is fiction. So the sensation of danger as I visualize hurling down has no basis in reality. Even if I died here, I can just reload. It is not like I would lose any progress in the game since this is the very beginning.
Think about what I would do if I listened to my instincts all the time. Imagine if the scenario were different and the game said it was time to go to school or get to work: would I spend all my time doing menial tasks because that is the right choice, or would I take risks since I am effectively immortal? That is the way to make it fun.
Gulp. I swallow my saliva as a sick, sweet feeling washes over me. I get the sense that out there somewhere a part of me that is pure reason has won out and is having a lot of fun. An unsuppressed shudder wracks my body, but it feels pleasurable somehow, as if my restraints are coming undone. I start to feel feverish as I decide to do what is, from a sensory perspective, suicide.
"Hello, are you new-"
My eyes shine with a delirious intensity and I grin, no longer scared at all of the horrifying height. Giving into my self-destructive urge, I put one leg over the railing and then the other.
"NO, WAI-"
Looking forward to it, I do not even notice a woman desperately rushing forward to try and grab me with an outstretched hand. Nor do I see the shocked expression on her face when she barely misses me.
Instead I feel the wind stinging past me and spread out my body like an eagle, feeling free. I am excited, and the wind feels nice and cool as I free fall, tempering my fever.
"Ahhhhh..." It feels really serene for a while to just float in the air, descending past the skyline and enjoying the whistling of the wind as it brushes past me. I completely forget about myself. After a while, I stop looking at the horizon and look down at the approaching cityscape. The thing I threw away, namely fear, starts to creep up on me as my mind conjures images of how I would splatter in the middle of a street somewhere.
Involuntarily, I start to grit my teeth and grasp around with my arms as if looking for a handhold to get back up.
Waves and waves of terror that I cannot resist come over me.
"AHHHHHHHH!!!" I let out a long scream. Flailing my limbs like crazy, I go into a tailspin.
OHSHIT, OHSHIT, OHSHIT!!!!
'Ahahahahaha!!!' If my reason had a voice it would be laughing happily right about now. I sensed a disquieting voice inside me doing just that. Somehow I've always had the ability to sense it, despite being gripped in abject panic like now.
OHSHIT, OHSHIT, OHSHIT!!!!
That very brief moment of introspection was a mere impulse in this very dangerous situation. I scream, I roll, I flail, I grasp the air. I am sensing the end - WHAT END! - coming ever closer.
OHSHIT, OHSHIT, OHSHIT!!!!
AWAWAWAWA!!! WHAT DO I DO, WHAT DO I DO!?
'Ahahahahaha!!!' My reason was having fun.
My shattered, panicking subconscious starts to get its bearings and my aerial form starts to stabilize. Feeling the air pushing at my back, I look to the side and see the chrome reflection of the skyscraper glass. Realizing what this means, I lose control again.
THIS ISN'T FUNNY!!! I NEED TO GET OUT!!!
Somehow I stabilize my posture again, except this time I end up facing head first towards the busy streets down below. I see a lot of pavement rushing towards me.
An idea finally occurs to me.
LOG OUT! WHERE IS THE LOG OUT BUTTON!? LOG OUT!
I frenziedly start searching for it.
"AHHHHHHHHHH!!!!" My mouth is busy screaming as usual, draining attention from more productive pursuits.
LOGOUT,LOGOUT,LOGOUT,LOGOUT,LOGOUT,LOGOUT!!!
In the meager time remaining, and with the ground fast approaching, I scream that word as loudly as I can in my mind hoping that it works. It does not.
IT IS OVER!!!!!
'Ahahahaha, stupid, my brain is so stupid, ahahahaha!!!' My voice of reason was absolutely content with the absurd situation, happy that I decided to throw myself into the unknown, ebullient at the suffering of the lizard parts. A childish, vicious bully that is always right. This time it got its way, for better or for worse.
At the point of contact I blank out. I am not sure if it is from terror or if the game itself cut off, but I do not feel the sensation of my skull shattering and every bone in my body breaking from the impact of such a height. Seen from above, I am sure I made a lovely red blot in the middle of a street somewhere.
Instead, my body does a little flop like a fish on dry land, and with a jolt I wake up in my room, lying on top of the bed where I dived in.
"Hah, hah, hah..." I pant heavily, swallowing air in great mouthfuls like a man's first drink after wandering the desert for days. My heart feels like it will burst. It takes a few minutes for me to calm down, after which I notice that I am completely drenched in sweat.
"Phew..." I sit upright and conclude:
Option {
* That was great! (Gnosis +4)
* I never want to do that again! (Gnosis -1)
}
Not caring about the sweat, I get back into the game.
$$$
4:15pm. $$$ Loading Blurb - Picking choices that increase AI risk are bad for society. But they are good for you. $$$
Let me write that one down for later.
4:30pm. I am having a difficult time making the words come out as I am tired. I know what I want to write, though; I just need a break. Writing these scenes strains my imagination, and I quickly run out of energy to do them properly.
4:40pm. Let me take a break then. After that I'll finish the Gnosis +4 scene.
5:25pm. Let me resume.
6:30pm. Oh, is it this late? How could I spend so long on that scene? I meant to do a bit more.
6:35pm. Lunch time.
The next scene is meeting with the guide.
I am thinking whether I should merge Pathos and Willpower. It might be a good idea. Yeah, let me do it. I'll go over the previous scenes in the journal and edit them accordingly.
6:55pm. Externus, Pathos and Gnosis. Those will be the 3 main mental and social stats.
7:05pm. Let me close here. Is Tog out? I am really enjoying the fight between Bam and White. I'll also go for the extra content in Hollow Knight after all.
Frustrating. We had the navbar working for a second: using selectUserByDisplayNameExactMatch, it would show the displayName of the logged-in user and hide the log-in button.
Then, since that solution doesn't work for users of non-Google auth providers, we tried troubleshooting the selectUserById selector and using that in the navbar. Now neither approach works, even though we tried completely reconstructing the code used in the previous working implementation. It still doesn't work, which makes me think the problem exists somewhere else. Sucks, though; I thought we had finally cracked this very stubborn bug.
- autotune is the new magic, says randy marsh
* Shelly's room, evening. Randy knocks on her door *
Randy Shelly, that's enough time on your phone.
Shelly Leave me alone, Dad! Stop nagging me all the time!
Randy You know we're all cutting down on phone time.
Shelly [sits up]
Don't limit me! You don't even understand me!
Randy [sees a poster of himself as <'famous' "musician">, his secret identity]
Yeah. I don't understand you at all. A lot you know.
[walks away saddened]
* The Marsh garage. Randy is adding more stacks of cash to those already hidden behind the poster. A door opens and Randy quickly seals it up. He gets to his workbench just as Stan closes the door. *
Stan Uh hey Dad. I need to talk to you.
Randy Oh really? A-About... about what?
Stan Dad, is it possible for someone to be one way on the outside but totally different on the inside?
[Randy sighs deeply and stands up to walk]
I mean, can someone identify as one sex but be
something else but still have it be nothing about sex?
Randy Yes. Yes, Stan. I am <'famous' "musician">.
Stan ...What?
Randy It started off so simple. There's a guy at work. Hanson. He would use the bathroom and just blow the thing up, you know? Not only that, but he was in there all the time! I finally got fed up and pretended to be a woman. I called myself <'famous' "musician">. Have you ever been in a woman's bathroom, Stan? It's all clean and there's enough stalls for everyone. It was so freeing. I started singing while I was in there, and then I- started writing things down.
Stan Well you said you knew a guy at work who was <'famous' "musician">'s uncle.
Randy Yah, that's my cover.
Stan The chick that wrote the theme song to the new , is you?
Randy Yeah.
[turns around and faces Stan]
The record company messed it all up. It was supposed to go:
"<shitty recession stimulus-funded book and movie series>,
yah yah yah, yah yah yah! <shitty recession stimulus-funded
book and movie series>."
But they just- do what they want with my songs.
Stan Wha-wait, <'famous' "musician"> sounds like a girl.
Randy Autotune. Wanna see how I do it?
[moments later, a music program pops up.
Twelve tracks are shown at lower left]
I come up with all my best stuff in the bathroom at work.
I use this program to import the recordings I make on my phone.
[plays the highlighted track]
"Yeah yeah, feeling good on a Wednesday. Sparklinnnnn'
thoughts. Givin' me the hope to go ohhhn"
[farts and poop noises]
"Oh! Whoa. What I need now is a little bit of shelter."
Stan Dad, <'famous' "musician">'s music is actually really good.
Randy Thanks.
But it gets even better when I add the drum loops.
[replays the same track with drum loops added]
Then with the computer I can actually quantize everything.
[brings up the quantizer and chooses his settings]
Backup instruments.
[scale, beats, bass, tambourine, guitars, strings]
And then finally I use the Autotune.
["Auto-Tuner v10." He chooses his settings there, and
the song is transformed. The same track is now enhanced
with <no name shitty "musician">'s voice and no trace of Randy]
"Sparklin' thoughts, feelin' good on a Wednesday.
Givin' me the hope, givin', givin' me the hope to go ohhhn.
What I need is a little bit of shelter."
[this is all too much for Stan to take in, and he passes out.]
[Randy notices]
Stan?
m68k: Leave stack mangling to asm wrapper of sigreturn()
sigreturn has to deal with an unpleasant problem - exception stack frames have different sizes, depending upon the exception (and processor model, as well) and variable-sized part of exception frame may contain information needed for instruction restart. So when signal handler terminates and calls sigreturn to resume the execution at the place where we'd been when we caught the signal, it has to rearrange the frame at the bottom of kernel stack. Worse, it might need to open a gap in the kernel stack, shifting pt_regs towards lower addresses.
Doing that from C is insane - we'd need to shift stack frames (return addresses, local variables, etc.) of C call chain, right under the nose of compiler and hope it won't fall apart horribly. What had been actually done is only slightly less insane - an inline asm in mangle_kernel_stack() moved the stuff around, then reset stack pointer and jumped to label in asm glue.
However, we can avoid all that mess if the asm wrapper we have to use anyway would reserve some space on the stack between switch_stack and the C stack frame of do_{rt_,}sigreturn(). Then C part can simply memmove() pt_regs + switch_stack, memcpy() the variable part of exception frame into the opened gap - all of that without inline asm, buggering C call chain, magical jumps to asm labels, etc.
Asm wrapper would need to know where the moved switch_stack has ended up - it might have been shifted into the gap we'd reserved before do_rt_sigreturn() call. That's where it needs to set the stack pointer to. So let the C part return just that and be done with that.
While we are at it, the call of berr_040cleanup() we need to do when returning via 68040 bus error exception frame can be moved into C part as well.
Signed-off-by: Al Viro [email protected]
Tested-by: Michael Schmitz [email protected]
Reviewed-by: Michael Schmitz [email protected]
Tested-by: Finn Thain [email protected]
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Geert Uytterhoeven [email protected]
tegra: lcd: video: integrate display driver for t30
On popular request make the display driver from T20 work on T30 as well. Turned out to be quite straight forward. However a few notes about some things encountered during porting: Of course the T30 device tree was completely missing host1x as well as PWM support but it turns out this can simply be copied from T20. The only trouble compiling the Tegra video driver for T30 had to do with some hard-coded PWM pin muxing for T20 which is quite ugly anyway. On T30 this gets handled by a board specific complete pin muxing table. The older Chromium U-Boot 2011.06 which to my knowledge was the only prior attempt at enabling a display driver for T30 for whatever reason got some clocking stuff mixed up. Turns out at least for a single display controller T20 and T30 can be clocked quite similar. Enjoy.
frustration
sadness, confusion, bad feelings, despair, depression, knives...
AI Objective Panels-An informational Catastrophe. (#27)
- AI Objective Panels-An informational Catastrophe.
Adds non-functional Waypoint code to cyborgs. God Damn you Bot Code.
Introduces a new A.I Verb, "Cyborg Management Panel", with a full UI detailing Linked Cyborgs, their Charge, Location, and allows the A.I to send Private messages.
Also introduces an Objective cross-reference system for A.I's and Cyborgs. Cyborgs are marked as either Occupied or Unoccupied on the A.I's Management Panel (also found on their own Status tabs). Cyborgs can manually switch their readiness for tasking with the "Toggle Flag" verb.
A.I's can set an objective for any linked Cyborgs from their Management Panel. Cyborgs can always find their current assigned objective in their status panel. Alternatively, Cyborgs are free to set their own objectives with the "Set Current Objective" verb.
- Makes Cyborg Task system function/QOL improvements in code.
Updates readability of Status Tab Additions.
Adds sound to Toggling Flag verb to indicate it's succeeded.
Incites the For() gods to fix every problem I've ever had. Fixes Self-Set objectives not working. Finalizes the Cyborg Task System. Cleans up unused code.
- Commenting out all Cyborg Waypoint Code
I honestly can't be bothered to figure out why the code refuses to tick, and it's going to take someone with a little more know-how to crack that. Please take a stab at it. I've left in all the code I've tried, just commented out.
- Removes Commented Out Cyborg Waypoint Code.
Removes Commented out unfinished waypoint code for cyborgs. Perhaps another day.
- Better Audio and Visual Indications of Objective Updates.
Cyborg Status panel now records priority of previously assigned objective.
Sound added for Cyborgs when their objective is remotely updated by the A.I.
Fixed line 1284 not reading its to_chat to the usr; it was missing a semicolon.
Var/objectivesconfirm default(shiftstart) updated from "Unregistered" to "Medium" to avoid generating confusion.
- Fixes Busystatus placement on verbs.
The "Set Current Objective" verb's Busystatus update has been moved beneath the objective update input. Previously it would set you to busy even if you pressed no; now it does not.
Moved the objectivesupdate() proc's busystatus and playsound above the to_chat instead of below; it was previously not reading.
Cyberbullied
The vulnerability affects the whole platform: manufacturers' hardware and the Internet of Things, all from an Android that is mirrored or cloned onto another system, whether Linux, Mac, Windows, or Chromebook. The attackers control, or run their own, broadcaster network with the privileges to do so, so your device looks like an ordinary standalone Android while actually being enrolled as if in an enterprise or a team of developers. I am a user with no computer skills, yet when I debug the Android myself I find .bin, .dat, and SELinux files, and even certificates for Windows and keys for Macs. There are many more vulnerabilities that affect not just the tech world but everyday people's lives, because a phone has a camera, records audio, takes photos, and runs apps where we post our lives on the internet. An attacker who can remotely take control of, or observe, your iOS or Android smartphone can use arbitrary code execution (ACE) to inject code, toggle the network or Wi-Fi, take over credentials, and rewrite code. The phone may look updated, but the SDK it pushes through is one step lower than the API it reports, because the real Android is null-status and in a different location from the cloned phone. Play Store and Samsung apps on the phone can demand controlling permissions that effectively take over the device, and you have no clue because of all the background work the system does: ACE injection from the system remotely, through APKs, SDK sandboxing, and scripted injections overwriting code. All the while, the phone in your hand has only child permissions; you think you are the only one able to access your phone, but there is a sidecar, remote or network-stacked, bootstrapped statically, and alongside the physical SIM a virtual SIM. The attacker can add an unknown network or change Wi-Fi credentials remotely, which gives them the chance to inject code in Java through the APK. I don't even know what an .so file is; I can access this info from pro dev tools for Android in the Play Store, where you are not safe.
They have your Android smothered by a bigger one that sucks in and sends out whatever data they choose, and can inject, erase, create, and rewrite files. This brings me to why the vulnerability is so illicit, undetectable, fraudulent, and fatal: we live in a cyberworld with a "strawman" and a "virtual citizen," and the attacker can become the virtual citizen. With today's technology the attacker can make and answer phone calls, reroute and send messages and emails, and receive and reroute calls in and out, all while using a voice synthesizer that imitates the victim's voice, letting the attacker pose as the victim's strawman on the telephone in real time; with real-time text, as they type, the synthesizer speaks in the victim's voice and dubs it in. With no knowledge of the internet or of Android's capabilities, the end user overlooks this attack entirely, even when the manufacturer remotely connects to look at the data and asks whether the phone belongs to a developer. I don't code; all I know is that when I debug my phone I get all these files. They hijack the virtual citizen without the strawman knowing that their identity is being used fraudulently and secretly: the virtual citizen is in Timbuktu while your strawman is at home, sleeping, dreaming of Timbuktu. It is a global concern.
v1.2.2
-Fixed an NPE, thanks Dave!
-Fixed several bugs with the remove s-mods feature and discovered 2 more that weren't fixed (they're honestly not worth the time to fix and I bet you won't even notice them)
-Added settings to:
  - disable the random s-mods hullmod (instead allowing standard s-modding), which kinda makes this whole mod moot but whatever, maybe you just want s-mod removing features
  - disable the remove s-mods hullmod, as god intended (still default false)
  - allow standard s-modding mechanics (which can be used in conjunction with random s-modding)
  - set the min number of s-mods to remove
  - set the max number of s-mods to remove
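Taken together, the new toggles might look something like this in a settings file. Every key name and default below is a guess made up for illustration; the mod's actual file location, key names, and defaults may differ:

```json
{
    "disableRandomSModsHullmod": false,
    "disableRemoveSModsHullmod": false,
    "allowStandardSModding": false,
    "minSModsToRemove": 1,
    "maxSModsToRemove": 3
}
```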
Some updates to make 'call' addresses be [UNDEF] if calling an imported symbol. Also some refactoring: everything into 1 file to make my life easier. It's gonna make the code kinda long, but I needed to share some variables between files, so I said fuck it.