< 2021-09-09 >

3,134,626 events, 1,590,638 push events, 2,410,680 commit messages, 189,312,030 characters

Thursday 2021-09-09 00:22:45 by CJ Bradley

Delete README.md

This project analyses the Happiness Index around the world using Python. In the middle of a pandemic, millions of citizens around the world have lost their jobs. They have watched helplessly as their savings dwindled and as they were confined to their homes, prohibited from interacting with friends and from attending church, temple, music events, and even sporting events due to restrictions enacted in response to the COVID-19 pandemic. These situations have driven up the statistics and indices related to depression and suicide. My project aims to identify the societal factors that governments will need to focus on to bring society together and produce a sense of relief and happiness. The Happiness Index is a valuable resource for determining the societal factors that influenced a country's happiness before and after the COVID-19 pandemic.


Thursday 2021-09-09 01:13:19 by gagan sidhu

update to 47382. new gluten-free themes available!

Principal Victoria All right everyone, thanks for coming. As you know, we urgently need to discuss the matter of Butters Stotch, who set fire to the school gymnasium and is now asking to come back. Are we all set to start? Woman Almost. We're just waiting on Mr. Mackey. Again. Mr. Adler Awww, do we need Mackey here? Mr. Garrison Yeah, all he's gonna talk about is how he's gluten-free now and feels 'sooo fucking amazing'. Principal Victoria Well, you have to admit he does look a little better. Mr. Adler He doesn't look any different to me. Principal Victoria In the cheeks, you don't think he looks a little fuller? Mr. Garrison It's just the new diet fad! [a door opens] Mr. Mackey Sorry I'm late. I had to stop and get my own breakfast because I figured y'all would be having doughnuts, but I'm actually gluten-free, so I can't have doughnuts, m'kay? Principal Victoria Yes, Mr. Mackey, we're all aware that you're gluten-free now. Mr. Mackey I'm just sayin' that I personally feel sooo fuckin' amazing

  • not gonna rain on anyone's excitement over the new themes.

Thursday 2021-09-09 02:01:07 by Cheese Curd

holy shit that's a lot, uh yea just look at the code lol

look at DataShit.hx for some bullshit lmfao


Thursday 2021-09-09 03:25:48 by Arafat Hoshen

#8 Django portfolio website source code [Bangla]

In this tutorial I'm going to teach you how to create a portfolio website with Django [Bangla]. In this website I create Navbar, Home, Blog, About and Contacts pages using Bootstrap.

Watch the video on my YouTube channel: https://www.youtube.com/channel/UCV6vGLwmJneo7leWpgjVBDA

What is Django? Django is a high-level Python web framework that encourages rapid development and clean, pragmatic design. Built by experienced developers, it takes care of much of the hassle of web development, so you can focus on writing your app without needing to reinvent the wheel. Django is an extremely popular and fully featured server-side web framework, written in Python, that allows you to quickly create web apps.
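As a rough illustration of how the pages of such a site are wired up (a minimal sketch; the view, template, and URL names here are my own assumptions, not taken from the tutorial):

```python
# views.py: views for a simple portfolio site (hypothetical names)
from django.shortcuts import render

def home(request):
    # Render a Bootstrap-based template; "home.html" is an assumed template name.
    return render(request, "home.html", {"title": "Home"})

def about(request):
    return render(request, "about.html", {"title": "About"})
```

```python
# urls.py: map URLs to the views above
from django.urls import path
from . import views

urlpatterns = [
    path("", views.home, name="home"),
    path("about/", views.about, name="about"),
]
```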

Why should we learn Django?

  1. Ridiculously fast. You can develop any kind of website very fast, and Django has a powerful community that can support you in the future.

  2. Django is highly secure. It takes security seriously and helps developers avoid many common security mistakes.

  3. Exceedingly scalable. Some of the busiest sites on the web leverage Django’s ability to quickly and flexibly scale.

  4. Many of the largest companies have built their websites with Django. YouTube, Instagram, Spotify, Pinterest, Dropbox, Bitbucket and many other big companies are building their websites with Django.

  5. Django is now very popular in the marketplace. If you want to be a freelance web developer, Django is perfect for you.


Thursday 2021-09-09 05:06:47 by Kyle Oliver

Holy shit we have memory addresses being acquired injected function contexts and it's actually fucking working. I'm going to go cry myself to sleep and try not to think about how much time this fucking took these past few days.


Thursday 2021-09-09 06:59:57 by Seng Yong Hoo

completed processing all programs (fuck yeah, even got the edge cases) it was a massive slog and mostly involved scouring my data structure to catch edge cases and also designing the processed program structure but, i got it done HELL YEA now time to migrate this to postgres (ugh, another pain in the ass)


Thursday 2021-09-09 09:16:15 by Asheem Siwach

Add files via upload

MALIGNANT COMMENT CLASSIFICATION

Submitted by: Asheem Siwach

ACKNOWLEDGMENT I would like to thank Flip Robo Technologies for providing me with the opportunity to work on this project, from which I have learned a lot. I am also grateful to Mr. Shubham Yadav for his constant guidance and support. Reference sources: Google, Stackoverflow.com, Analytics Vidhya, and notes and repositories from DataTrained. Research papers: https://www.nltk.org/book/ch05.html https://www.analyticsvidhya.com/blog/2017/01/ultimate-guide-to-understand-implement-natural-language-processing-codes-in-python/

INTRODUCTION

• Business Problem Framing The proliferation of social media enables people to express their opinions widely online. However, at the same time, this has resulted in the emergence of conflict and hate, making online environments uninviting for users. Although researchers have found that hate is a problem across multiple platforms, there is a lack of models for online hate detection. Online hate, described as abusive language, aggression, cyberbullying, hatefulness and many others, has been identified as a major threat on online social media platforms. Social media platforms are the most prominent grounds for such toxic behaviour. There has been a remarkable increase in the cases of cyberbullying and trolls on various social media platforms. Many celebrities and influencers face backlash and come across hateful and offensive comments. This can take a toll on anyone and affect them mentally, leading to depression, mental illness, self-hatred and suicidal thoughts.

• Conceptual Background of the Domain Problem Internet comments are bastions of hatred and vitriol. While online anonymity has provided a new outlet for aggression and hate speech, machine learning can be used to fight it. The problem we sought to solve was the tagging of internet comments that are aggressive towards other users. This means that insults to third parties such as celebrities will be tagged as inoffensive, but “u are an idiot” is clearly offensive. Our goal is to build a prototype of an online hate and abuse comment classifier which can be used to classify hate and offensive comments so that they can be controlled and restricted from spreading hatred and cyberbullying.

Analytical Problem Framing • Mathematical/Analytical Modeling of the Problem This was an NLP project, so we deal with textual data. To understand the data we used methods such as removing punctuation, numbers, and stopwords, and used the lemmatization process to convert complex words into their simpler forms. These processes helped remove unwanted words from the comments, leaving only the words that would help in our model building. After cleaning the data we used the TF-IDF Vectorizer technique to convert the textual data into vector form. This technique works on the basis of the frequency of words present in the document. After fitting it on the training dataset, we apply the same technique to the test dataset. • Data Sources and their formats This data was provided to me by FlipRobo Technologies in CSV file format. It contains a training and a testing dataset. We build a model on the training dataset and use it to predict the outcomes on the testing dataset. The training set has approximately 159,000 samples and the test set contains nearly 153,000 samples. All the data samples contain 8 fields, which include ‘Id’, ‘Comments’, ‘Malignant’, ‘Highly malignant’, ‘Rude’, ‘Threat’, ‘Abuse’ and ‘Loathe’. Each label can be either 0 or 1, where 0 denotes NO and 1 denotes YES. Various comments have multiple labels. The first attribute is a unique ID associated with each comment.
The data set includes:

- Malignant: the label column, with values 0 and 1, denoting whether the comment is malignant or not.
- Highly Malignant: denotes comments that are highly malignant and hurtful.
- Rude: denotes comments that are very rude and offensive.
- Threat: indicates comments that threaten someone.
- Abuse: for comments that are abusive in nature.
- Loathe: describes comments which are hateful and loathing in nature.
- ID: a unique id associated with each comment text given.
- Comment text: the comments extracted from various social media platforms.

First of all we load the training dataset into the df_train dataframe and check the datatypes present in it.

So the training data contains both object and integer columns: 2 columns are of object dtype and 6 columns are of integer dtype.

• Data Preprocessing Done Data preprocessing is the data mining technique of transforming raw data into an understandable format. The first preprocessing step is to import the data and the libraries to be used in model building.

Now, after loading the dataset, we check for missing values –

There were no null or missing values in our dataset, so we move to the next step, data cleaning. In data cleaning we drop the ID column as it gives no information. After dropping it we created another feature, “negative_cmnts”, which labels the comments as positive or negative.

Now, to clean the textual data, we created a function to remove unwanted spaces, punctuation, numbers, emails, phone numbers, etc., convert upper-case letters to lower case, and append the result to a new column.

After removing the unwanted notation, we moved on to removing stopwords from our dataset. For this we created another function named stop_words and appended the text to another new column.

After removing all the unnecessary words and numbers, we converted the words into their simpler form using the lemmatization process. For this we defined two functions: the first tags each word with its part of speech, and the second converts it into its simpler form using that tag.
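A minimal sketch of what such cleaning and POS-tag-driven lemmatization functions can look like with NLTK (the function names and regexes are my assumptions, not the report's actual code; the usual NLTK data downloads are required):

```python
import re
import nltk
from nltk.corpus import stopwords, wordnet
from nltk.stem import WordNetLemmatizer

# One-time downloads: nltk.download("stopwords"), "wordnet", "averaged_perceptron_tagger"
STOP_WORDS = set(stopwords.words("english"))
lemmatizer = WordNetLemmatizer()

def clean_text(text):
    # lower-case, strip emails, digits and punctuation, collapse whitespace
    text = text.lower()
    text = re.sub(r"\S+@\S+", " ", text)     # emails
    text = re.sub(r"[^a-z\s]", " ", text)    # digits, punctuation, phone numbers
    return re.sub(r"\s+", " ", text).strip()

def wordnet_pos(tag):
    # map Penn Treebank tags to the WordNet POS constants the lemmatizer expects
    return {"J": wordnet.ADJ, "V": wordnet.VERB, "R": wordnet.ADV}.get(tag[0], wordnet.NOUN)

def lemmatize(text):
    tokens = [w for w in text.split() if w not in STOP_WORDS]
    return " ".join(lemmatizer.lemmatize(w, wordnet_pos(t)) for w, t in nltk.pos_tag(tokens))

print(lemmatize(clean_text("These comments were really hurting people!")))
```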

Now we took a random sample from our dataset and compared the text before and after the treatment.

Encoding the categorical data (Feature Extraction and Scaling) Now that the data is cleaned, another major step towards building a model is to convert the textual data into numerical form, because our algorithms understand only numerical data. To convert it into numerical (vector) form we used the TF-IDF Vectorizer technique, which converts the textual data into vectors using term frequencies.
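A minimal sketch of that vectorization step with scikit-learn's TfidfVectorizer (the stand-in comments below are assumptions; the report's exact parameters are not shown):

```python
from sklearn.feature_extraction.text import TfidfVectorizer

train_comments = ["you are great", "u are an idiot"]   # stand-ins for the cleaned training text
test_comments = ["what an idiot"]                      # stand-ins for the test text

# Fit the vocabulary and IDF weights on the training comments only,
# then reuse the fitted vectorizer to transform the test comments.
vectorizer = TfidfVectorizer()
X_train = vectorizer.fit_transform(train_comments)
X_test = vectorizer.transform(test_comments)
print(X_train.shape, X_test.shape)
```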

Splitting the dataset – The last data-preprocessing step is splitting the dataset into training and testing sets. Using the train_test_split method from model selection, we split the dataset into training and testing parts.
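A small sketch of that split (the variable names, split ratio, and stand-in data are assumptions):

```python
import numpy as np
from sklearn.model_selection import train_test_split

X = np.random.rand(100, 5)          # stand-in for the TF-IDF feature matrix
y = np.random.randint(0, 2, 100)    # stand-in for the 0/1 "negative_cmnts" label

# Stratify on the label so the imbalanced classes keep the same ratio in both splits.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42, stratify=y)
```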

• Data Inputs- Logic- Output Relationships

All the harmful-comment labels present in the dataset are positively correlated with each other. Rude and abuse are highly correlated.

• Hardware and Software Requirements and Tools Used There is no particular hardware requirement, but I used an Intel i5 8th-generation processor with 6 GB RAM. Software: Jupyter Notebook (Anaconda 3). Language: Python 3.9. Libraries used in the project: a. Pandas b. Numpy c. Matplotlib d. Seaborn e. Sklearn

Model/s Development and Evaluation • Identification of possible problem-solving approaches (methods) In this project there were 5-6 features that define the type of comment, like malignant, hate, abuse, threat and loathe, but we created another feature named “negative_cmnts” which combines all of the above features and contains labelled data in 0/1 format, where 0 represents “NO” and 1 represents “Yes”. Since our target feature contains labelled data, this is a classification problem, so we are going to use classification algorithms.

• Testing of Identified Approaches (Algorithms) Based on the classification approach, we are going to use the following algorithms: I. Logistic Regression II. Decision Tree Classifier III. RandomForest Classifier IV. AdaBoost Classifier V. GradientBoosting Classifier. Naïve Bayes algorithms are used most often in NLP, but we ran into memory errors with them on our systems.

• Run and Evaluate selected models

I. Logistic Regression -

II. Decision Tree Classifier –

III. RandomForest Classifier –

IV. AdaBoost Classifier –

V. GradientBoosting Classifier –

• Key Metrics for success in solving problem under consideration

To solve the problem and understand the result of each algorithm, we used several metrics:

a) Accuracy Score b) Classification Report c) Confusion Matrix d) Log Loss e) ROC-AUC

• Visualizations For visualization we used the Matplotlib and Seaborn libraries to plot the numerical data as graphs –

• Interpretation of the Results

As our target feature is imbalanced, the accuracy score alone will not give the best picture, so in addition to accuracy we use the classification report, log loss score, ROC-AUC score and confusion matrix to find the algorithm that is best overall.

So we selected Logistic Regression and RandomForest Classifier based on the above criteria and used RandomizedSearchCV to hyper-tune the parameters of these 2 selected algorithms. After hyper-tuning the parameters, we selected the Logistic Regression algorithm, as it gives up to an 85% ROC-AUC score and a log loss of 1.38, far better than the RandomForest Classifier, whose log loss was above 2.0; the classification report also shows that it is better than the RandomForest Classifier.
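A minimal sketch of that tuning-and-comparison step (the parameter grid, scoring choice, and stand-in data below are my assumptions, not the report's exact settings):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report, log_loss, roc_auc_score
from sklearn.model_selection import RandomizedSearchCV

# Stand-in data; in the project this would be the TF-IDF train/test split.
rng = np.random.default_rng(0)
X_train, y_train = rng.random((200, 10)), rng.integers(0, 2, 200)
X_test, y_test = rng.random((50, 10)), rng.integers(0, 2, 50)

param_dist = {"C": [0.01, 0.1, 1, 10]}    # assumed search space
search = RandomizedSearchCV(LogisticRegression(max_iter=1000), param_dist,
                            n_iter=4, scoring="roc_auc", cv=3, random_state=42)
search.fit(X_train, y_train)

proba = search.predict_proba(X_test)[:, 1]
print("ROC-AUC :", roc_auc_score(y_test, proba))
print("Log loss:", log_loss(y_test, proba))
print(classification_report(y_test, search.predict(X_test)))
```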

• Predictions

These are the predictions from our testing dataset.

CONCLUSION • Key Findings and Conclusions of the Study The conclusions of our study: a) In the training dataset only about 10% of the data spreads hate on social media. b) Within this 10%, most of the comments are malignant, rude or abusive. c) Using a wordcloud we find that many abusive words are present in the negative comments, while the positive comments contain no such words. d) Some of the comments are very long while some are very short.

• Learning Outcomes of the Study in respect of Data Science From this project we learned a lot: new techniques and ways to deal with uncleaned data, a way to handle multiple target features, and visualization tools that give a better understanding of the dataset. We used several algorithms and found that in a classification problem with only two labels, Logistic Regression gives better results than the others. Due to our systems, however, we could not use algorithms that often give much better results in NLP projects, such as GaussianNB and MultinomialNB. We also tried Google Colab and some pipeline techniques, but none of them worked here and they were too time-consuming. • Limitations of this work and Scope for Future Work This project was amazing to work on and raised new ideas to think about, but it had some limitations, such as the unbalanced dataset and multiple target features. To overcome these limitations we would have to use a balanced dataset so that the algorithm does not give biased results during training.


Thursday 2021-09-09 09:59:15 by Sergey

chore: Clean-up copy-pasted Compass styles from all the packages (#2448)

  • chore: Clean-up copy-pasted Compass styles from all the packages

This was not as straightforward as initially expected, but brings a few benefits:

  • All plugins are now referencing compass styles from the single source
  • No need for download-akzidenz scripts in any package except for Compass
  • Just cleans up a bunch of cobweb covered code in the repo that is hard to justify now
  • Fixes fonts and icons when running plugins as a standalone playground

This change required updating all less loaders in all the packages and more explicitly providing information about what is a CSS module and what is not based on the file name, so almost all less files got renamed.

This change also required hackily referring to mongodb-compass as a library in a few places, creating a recursive dependency, but imo this clean-up is worth it and eventually we can move the things that are actually shared between plugins and Compass into their own, separate place

Extremely sorry in advance for a massive diff that this generated, I'm happy to go through the whole thing together with whoever feels brave enough to review it

  • chore: Remove unused font files

  • fix(compass-connect,compass-sidebar,compass-loading): Change gray color reference to grayDark

  • chore: Remove accidentally committed transform

  • fix(compass-crud): Do not use :global in non-module less files


Thursday 2021-09-09 11:35:33 by SgtHunk

[SEMI-MODULAR] [READY?] Antagonist Tips - preference for it included. (#187)

  • The siren song begins to playy

Look who's laughing nowwwwwwwwwwwwww

  • She appears to have tossed the hamburger outside the automobile

readme.md alphabetizes because fuck you my ocd demands it (i dont actually have ocd please don't cancel me i just don't like unalphabetized lists) Deletes IAA tips (IAA can't even trigger on dynamic!!!! I'll re-add them when IAA happens!!)


Thursday 2021-09-09 11:58:56 by Eric Myhre

schema-schema: switch to use of keyed unions.

This is a big switch. At least, syntactically, in the DMT form.

It's a tiny switch, almost nothing at all, in the logical form. (Neat how that works, isn't it?)

The primary driver for this is that it makes the schema-schema use simpler features of the schema system. This is something we're taking as explicit design goal for the schema-schema from now on (and applying as a reason for some significant "fixes" to previous choices like this one, too):

The schema-schema should intentionally use only as many of the features of schemas as necessary, and it should prefer using the features that we feel are reasonable to expect to be implemented early in a new IPLD library in a new language.

That design rule means inline unions are out.

(I actually added documentation about that design goal a few commits ago, in prose, in one of the other specs pages; now it's being enacted!)

(A quick rant about inline unions:

Inline unions are one of the most complicated things in the whole system to write a correct parser for. They're riddled with edge cases and tricky corners. And making them efficient? Fwoosh. Efficiently parsing an inline union is an absolute nightmare; there are possibilities for lookahead and lookback and reprocessing galore. They're orders of magnitude more complicated than any other feature. Frankly, I'd never have specified them at all if in a total greenfield; the reason the inline union strategy is present in IPLD Schemas at all is because of our broader goal to "describe preexisting data, and be able to embrace conventions which were already popular". Outside of compatibility missions, I would now never recommend the use of inline union strategies in new protocols.
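To make the contrast concrete, here is my own illustration (written as Python dict literals, not taken from the schema-schema itself) of how the same piece of data looks under the two representation strategies:

```python
# Keyed union: the single-entry outer map names the member type up front,
# so a parser knows what it is decoding before reading any fields.
keyed = {"circle": {"radius": 1.0}}

# Inline union: the discriminant key sits alongside the member's own fields,
# so a parser may have to scan the whole map before it knows which member
# it is looking at; hence the lookahead/lookback pain described above.
inline = {"shape": "circle", "radius": 1.0}
```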

And on top of that -- personally -- I think inline unions suck to read. Keyed unions have become the most readily legible thing to me, after a few months of reading both. In my opinion, after this diff, the schema-schema JSON document is significantly less bewildering.

... Right. Rant complete. Moving on.)

The diff here is unfortunately large (nearly total, in the JSON), because keyed unions add a level of indentation where inline did not. Still: worth it.


Thursday 2021-09-09 14:23:09 by gisimon23

Create blumenstock.md

The possible data science developments that Joshua Blumenstock discusses in his article are utopian and appear to be a positive attempt at furthering international development. There is definitely promise in the ways to target those in need in order to get them the help they need. Proper utilization promises getting resources to those in need in a timely, cost-effective manner. Collaboration among systems appears to promote a more flawless tactic for pinpointing areas that seek development. As good as everything sounds, there are some pitfalls. Blumenstock elaborates on at least four, including unanticipated effects, lack of validity, bias, and lack of regulation. It's possible that efforts remain out of reach for the most vulnerable and impact those who do not necessarily need the aid. There is always room for error when it comes to digital data collection. Algorithms can often be biased and disregard certain outliers and marginalized groups. Many groups are unwilling to completely open themselves up through technology as a need for privacy and protection is at stake. The statements made by students are all very inquisitive and have meanings to consider. Anna’s statement has a lot of truth to it. Not just in this scenario but with many, sometimes plans with good intentions tend to miss the mark and do more harm than good. This relates to the unanticipated effects that Blumenstock mentioned. Nira’s message about transparency is true but not necessarily a make or break point in both data based and human based issues. I do think it is important that those creating, collecting, and distributing data in order to further international development remain transparent about their efforts. If researchers, analysts, etc. were more open, individuals may be more willing to open up, and problems such as the lack of regulation that Blumenstock mentioned could be solved. Kayla’s quote is a lot to unpack. There is a lot of promise in what is being proposed but it comes with possible consequences such as improper representation and marginalization. It will be a challenge to create a proper system that will account for those not engaging in certain aspects like social media, leaving them out of some of the proposed solutions. It can be hard to relate a topic with no face, just numbers and facts, like data science, to something like human development that introduces morals and ethics. All the statements made by the students are great points and leave a lot up for discussion.


Thursday 2021-09-09 16:11:54 by Milas Bowman

docker-compose: log tailing robustness fixes (#4943)

The core issue this aims to solve is that for a stopped container, docker-compose logs exits, and Tilt would continuously attempt to re-launch it, resulting in high CPU usage due to repeatedly invoking the command.

However, there were actually several intermingled issues that this exposed/worsened. These mostly tie back to the behavior of docker-compose logs: there is no built-in support for time filtering (unlike docker logs --since ...), and since it exits when a container stops, once invoked again, all the old logs will be returned again. This means if a container crashes/restarts, we'd re-emit all the old logs each time. Similarly, if there is a "job" container that naturally runs to completion, whenever re-triggered from the UI to run it, all the logs from prior invocations would be re-logged. (This could also mean when attaching on start up that you'd get a bunch of logs prior to the last [re]start, which isn't super useful either.)

To solve this issue, docker-compose logs is now invoked with the --timestamps argument, which prepends an RFC3339Nano timestamp before each log line. Of course, there are caveats: first, Compose v2 includes a whitespace before the timestamp and second, meta logs from Compose itself about container events do NOT have a timestamp. As a result, the attempt to parse/filter by the timestamp is very forgiving. (There's an additional wrinkle which is that evidently the StartedAt time reported by Docker for a container is not precise, so you can actually get logs with earlier timestamps!)
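A rough sketch of that forgiving timestamp handling, written here in Python purely for illustration (Tilt itself is in Go, and the names and regex below are my assumptions, not the actual implementation):

```python
import re
from datetime import datetime, timezone

# Matches an RFC3339(Nano) timestamp at the start of a line, e.g.
# "2021-09-09T16:11:54.123456789Z hello"; Compose v2 may prepend whitespace.
TS_RE = re.compile(r"^\s*(\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2})(?:\.(\d+))?(?:Z|[+-]\d{2}:\d{2})\s?")

def filter_line(line, since):
    """Return (keep, text). Lines without a parseable timestamp (e.g. Compose's
    own meta logs about container events) are always kept; otherwise the prefix
    is stripped and the line is kept only if it is not older than `since`."""
    m = TS_RE.match(line)
    if not m:
        return True, line
    base, frac = m.group(1), m.group(2) or "0"
    micros = frac[:6].ljust(6, "0")                     # nanoseconds -> microseconds
    ts = datetime.strptime(f"{base}.{micros}", "%Y-%m-%dT%H:%M:%S.%f")
    ts = ts.replace(tzinfo=timezone.utc)                # sketch: assume UTC offsets
    return ts >= since, line[m.end():]

keep, text = filter_line("2021-09-09T16:11:54.123456789Z hello",
                         datetime(2021, 9, 9, tzinfo=timezone.utc))
print(keep, text)   # True hello
```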

Tying back to the original problem that once docker-compose logs exits, Tilt tries to continuously re-invoke it, the watcher behavior has been altered to more closely mirror the Pod log stream watcher. When reading, if EOF is reached and/or the context is canceled, it stops. The former case means that docker-compose logs exited normally, indicating that the container is no longer running. The latter case means the manifest is being torn down, so we no longer want logs. Any errors during reading will cause it to retry, e.g. if docker-compose logs crashes or is terminated abnormally. (To support propagating process execution errors, the way that logs are read has been adjusted slightly, which actually resolves a race condition on reading logs and the docker-compose logs process terminating, which ensures that short-lived containers get all their logs!)

In the subscriber, when looking at already created watchers, if the watcher has stopped running, a new one will only be created if the corresponding container's start time is AFTER the last time it started. This is the key to not continuously starting up new docker-compose logs processes! (Finally.) We can't key off container ID because that's consistent across restarts, including both crashes and job containers that are run multiple times without changes.

Finally, there is a fix for the Docker Compose event watcher which was inadvertently broken (by yours truly) a while back. It was not actually ever running, so Tilt wouldn't notice external changes (e.g. docker-compose restart foo), which meant state was out of sync. This was not really noticeable before since we were insanely aggressive about re-establishing docker-compose logs sessions. After these fixes, however, it meant that any container events not initiated via Tilt would go unnoticed, so the logs would not get resumed, as the new container start time would never get picked up.

TL;DR Docker Compose logs should behave much nicer now both in terms of CPU usage as well as under various common scenarios like container restarts or job containers


Thursday 2021-09-09 16:50:28 by ariufl islam

Update README.md

Hi 👋, I'm Ariful Islam

A passionate backend and frontend developer from Bangladesh.

ariful007

ariful007

ariful7852

Connect with me:

ariful7852 arifulislam731 ariful33_33 programming school 2 asariful.islam.731 programming school ariful33 as_ariful ariful33 https://auth.geeksforgeeks.org/user/arifulpub143/profile

Languages and Tools:

android apachecordova c cplusplus firebase java kotlin mssql opencv python realm sqlite tensorflow

ariful007

 ariful007

ariful007


Thursday 2021-09-09 16:55:06 by Rick Farina (Zero_Chaos)

gentoo renamed beautifulsoup:4 to beautifulsoup4 because fuck you


Thursday 2021-09-09 18:01:26 by Eduardo Soares

Parse variable declarations / assignments

And fix the dumbest shit ever. Every token that consumed two chars was broken because I didn't increment the index after chomping the second character.

Did a duct-tape fix but god I feel so stupid


Thursday 2021-09-09 18:02:54 by Marko Grdinić

"1:20pm. One more chapter of Kurogane no Mahotsukai and then I'll resume. I'll aim to put a dent in this book today. After that I should pursue the goal of getting more familiar with PPLs by working on one.

1:35pm. Just a bit more.

1:40pm. Let me start. As much as I fear the future, this is the path I've chosen. Maybe I'll end up wasting the next six years and achieve nothing. But if despair is all that awaits me I should meet it with courage.

The potential to make the agents that could make me money is there. Since I've failed at making use of NN-based agents I need to get closer to the fundamentals. After the fixed rule agents, the probabilistic ones are the next evolutionary step. I am not against getting down and dirty with writing my own rule based agents as long as they have the potential to reach the apex.

What I am going to do here, namely mastering PPLs, is extreme, but it is not more extreme than what I've done so far. It is about par for me.

Let me do this thing. There is nothing beyond mastering PPLs. There is nothing beyond Bayesian rationality.

1:50pm. All I ever wanted from games was the ability to win at them. I never had an interest in making them. This ancient wish that I could not fulfil during my school days will one day be accomplished.

2:15pm. 116/218. I implemented something like this except monadically in the past. I won't focus too much on this book right now. This book is just so I get a gist of the field. After that is done I should immerse myself in programming. That is how I will attain mastery.

2:40pm. 128/218. My focus is low. Let me take a break here. I'll aim to finish the book by today so I can get concrete stuff done tomorrow.

3:10pm. 134/218. I am not really absorbing much from the book, in fact I seem to be skimming more than I am reading, but this is refreshing my memory of what I had done before. This thing here is just a genetic algorithm. That is what sequential MC is.

We will define a more realistic implementation of SMC in Section 6.7, once we have introduced an execution model based on continuations, which eliminates the need to rerun the first n − 1 steps at each stage of the algorithm.

Oh will that come into play? That is how I implemented it in the past.

3:10pm. 136/218. Variational inference is not something I've tried before. Figuring out how to make use of it should be a priority.

3:15pm. 137/218. VI is something I should master. But instead of pigheadedly focusing on the NN aspects like before, I should make probabilistic programming my focus. I should do it from the top down and then figure out how to optimize the model after that.

3:30pm. 141/218. At this point I am 2/3rds of the way into the book. I am not sure if I will manage it all today, but let me keep going. Putting in another 30 pages should be doable.

4:10pm. 156/218. This is interesting because the approach is viable, but it is not how I would've ever attempted CAPTCHA breaking. I would have gone with the NN approach, probably to my detriment.

158/218. Here is chapter 6. Let me take a break here though.

4:40pm. Ok, let me read chapter 6. It should not be long till I am done with the book.

As in Chapter 4, the inference algorithms in this chapter use program evaluation as one of their core subroutines. However, to more clearly illustrate how evaluation-based inference can be implemented by extending existing languages, we abandon the definition of inference algorithms in terms of evaluators in favor of a more language-agnostic formulation; we define inference methods as non-standard schedulers of HOPPL programs. The guiding intuition in this formulation is that the majority of operations in HOPPL programs are deterministic and referentially transparent, with the exception of sample and observe, which are stochastic and have side-effects. In the evaluators in Chapter 4, this is reflected in the fact that only sample and observe expressions are algorithm specific; all other expression forms are always evaluated in the same manner. In other words, a probabilistic program is a computation that is mostly inference-agnostic. The abstraction that we will employ in this chapter is that of a program as a deterministic computation that can be interrupted at sample and observe expressions. Here, the program cedes control to an inference controller, which implements probabilistic and stochastic operations in an algorithm-specific manner.

I wonder if all the functionality of Omega could be implemented in this manner. Maybe replace would be an exception. Still, it would be possible to the replace implicitly rather than explicitly.
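A toy sketch of that interruption idea using Python generators (my own illustration of the book's description, not code from the book): the model yields at each sample/observe point and an external controller decides what happens there.

```python
import random

def model():
    # The "program": ordinary deterministic code that cedes control
    # to the controller at every sample and observe expression.
    x = yield ("sample", "uniform")      # controller supplies the value
    yield ("observe", x > 0.5)           # controller records a weight for this datum
    return x

def run_once():
    # A trivial controller: answer "sample" by drawing, "observe" by weighting.
    gen = model()
    msg, weight = next(gen), 1.0
    try:
        while True:
            kind, payload = msg
            if kind == "sample":
                msg = gen.send(random.random())
            else:  # "observe"
                weight *= 1.0 if payload else 0.0
                msg = gen.send(None)
    except StopIteration as stop:
        return stop.value, weight        # the program's return value plus its weight

print(run_once())
```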

4:55pm. 162/218. This is more elaborate than I thought it would be, but it is fine.

In the case of Turing (Ge et al., 2018), the implementing language (Julia) provides coroutines, which specify computations that may be interrupted and resumed later.

Since I will be doing work in Julia, I should check out Turing at some point.

In the HOPPL, we will implement support for interruption and forking of program executions by way of a transformation to continuation passing style (CPS), which is a standard technique for supporting interruption of programs in purely functional languages

Oh yeah, that is what I am talking about.

5:35pm.

We assume the existence of an interface that supports centrally-coordinated asynchronous message passing in the form of request and response. Common networking packages such as ZeroMQ (Powell, 2015) provide abstractions for these patterns. We will also assume a mechanism for defining and serializing messages, e.g. protobuf (Google, 2018).

They really went quite a bit further than I imagined. Personally, I'd just use the monadic patterns and leave it at that. These guys are talking about potentially distributing the computations across different machines. But admittedly, being able to easily scale to all available hardware is one of the great advantages of randomized algorithms.

Bayesian inference is a natural fit for AI chips.

6pm. 184/218. Much like when I started work on Spiral in late 2016, I am not getting too much out of this book anymore than I did out of type systems books back then, but it has the purpose of cementing my role as an upcoming PPL creator and refining my resolve to do it. Let me finish this chapter and then I will plunge into it tomorrow.

6:05pm. > We look in a few different directions, beginning with two ways in which probabilistic programming can benefit from integration with deep learning frameworks, and then move on to looking at challenges to implementing Hamiltonian Monte Carlo and variational inference within the HOPPL and implementing expressive models by recursively nesting probabilistic programs.

Yeah, I am interested in this.

7pm. Done with lunch. Let me finish the last chapter.

7:35pm. 205/218. Here is the conclusion chapter.

However, at this point in time, probabilistic programming systems have not developed to such a level of maturity and as such knowing something about how they are implemented will help even those people who only wish to develop and use probabilistic programs rather than develop languages and evaluators.

It may be that this state of affairs in probabilistic programming remains for comparatively longer time because of the fundamental computational characteristic of inference relative to forward computation. We have not discussed computational complexity at all in this text largely because there is, effectively, no point in doing so. It is well known that inference in even discrete-only random variable graphical models is NP-hard if no restrictions (e.g. bounding the maximum clique size) are placed on the graphical model itself. As the language designs we have discussed do not easily allow denoting or enforcing such restrictions, and, worse, allow continuous random variables, and in the case of HOPPLs, a potentially infinite collection of the same, inference is even harder. This means that probabilistic programming evaluators have to be founded on approximate algorithms that work well some of the time and for some problem types, rather than in the traditional programming language setting where the usual case is that exact computation works most of the time even though it might be prohibitively slow on some inputs. This is all to say that knowing intimately how a probabilistic programming system works will be, for the time being, necessary to be even a proficient power user.

Mhhh...I am done for the day. There is no way around it, I just need to go forward. Tomorrow, I will start grinding my ML skill directly. I'll start work on my Omega imitation library.

7:55pm. https://news.ycombinator.com/item?id=28463482 How Doctors die. It’s not like the rest of us (2016)

I'll save this article for some future PL sub review. Earlier in the day, I checked out how much cryonic freezing costs and it is almost 100k dollars for just the brain. If I don't get any income from the PPL work so be it, but if I do I'll want to spend it to freeze my mom if her cancer gets bad."


Thursday 2021-09-09 18:52:03 by Snigdhadip Banerjee

Create Candy problem

It was one of those places where people need to get their provisions only through fair price (“ration”) shops. As the elders had domestic and official work to attend to, their wards were asked to buy the items from these shops. Needless to say, there was a long queue of boys and girls. To minimize the tedium of standing in the serpentine queue, the kids were given mints. I went to the last boy in the queue and asked him how many mints he has. He said that the number of mints he has is one less than the sum of all the mints of kids standing before him in the queue. So I went to the penultimate kid to know how many mints she has.

She said that if I add all the mints of kids before her and subtract one from it, the result equals the mints she has. It seemed to be a uniform response from everyone. So, I went to the boy at the head of the queue consoling myself that he would not give the same response as others. He said, “I have four mints”.

Given the number of first kid’s mints (n) and the length (len) of the queue as input, write a program to display the total number of mints with all the kids.

Input: 14 4 Output: 105
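Since each kid after the first holds one less than the sum of everyone before them, the running total S simply follows S -> 2S - 1, i.e. the final total is (n - 1) * 2^(len - 1) + 1. A small sketch (assuming the first input number is n and the second is len, which matches the sample 14 4 -> 105):

```python
def total_mints(n, length):
    total = n                      # the first kid's mints
    for _ in range(length - 1):    # every later kid holds (current total - 1) mints
        total = 2 * total - 1
    return total

# Sample: n = 14, len = 4  ->  kids hold 14, 13, 26, 52  ->  total 105
print(total_mints(14, 4))
```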


Thursday 2021-09-09 19:38:57 by Shane F. Carr

Create notes-2021-09-09.md

2021-09-09 ECMA-402 Meeting

Logistics

Attendees

  • Shane Carr - Google i18n (SFC), Co-Moderator
  • Corey Roy - Salesforce (CJR)
  • Romulo Cintra - Igalia (RCA), MessageFormat Working Group Liaison
  • Thomas Steiner - Google (TOM)
  • Frank Yung-Fong Tang - Google i18n, V8 (FYT)
  • Long Ho - (LHO)
  • Zibi Braniecki - Mozilla (ZB)
  • Eemeli Aro - Mozilla (EAO)
  • Greg Tatum - Mozilla (GPT)
  • Yusuke Suzuki - Apple (YSZ)
  • Louis-Aimé de Fouquières - Invited Expert (LAF)
  • Richard Gibson - OpenJS Foundation (RGN)
  • Myles C. Maxfield - Apple (MCM)

Standing items

Status Updates

Editor's Update

RGN: No updates.

MessageFormat Working Group

RCA: We are working on a middle-ground data model that I hope will unblock the situation. EAO is focused on it, with Stas, Mihai, etc. EAO also put together an initial spec proposal.

EAO: I put together a spec outline, not a specific proposal. I think we will be able to merge it later this week.

Proposal Status Changes

https://github.com/tc39/ecma402/wiki/Proposal-and-PR-Progress-Tracking

FYT: Some more Test262 coverage is done. But we still need help.

RCA: I updated browser compat for locale info, documentation for hour cycle, etc.

FYT: Do we have an instruction guide about how to update MDN?

RCA: The process is moving quickly. It will be easier, though: you can just edit a Markdown file.

Pull Requests

Add changes to Annex A Implementation Dependent Behaviour

tc39/proposal-intl-locale-info#43

FYT: We added some changes to Appendix A. Does this look good? Do we have consensus to report this to TC39?

SFC: +1

RGN: +1

LAF: +1

Conclusion

Approved

Change weekInfo to express non-continouse [sic] weekend

tc39/proposal-intl-locale-info#44

FYT: Some regions have a non-contiguous weekend. This PR changes the data model to reflect that.

LAF: I wonder how this should be understood for all countries. In certain countries, the two "out of business" days may be not contiguous. Should we call it business day and non business day? Because "weekend" might not be the correct terminology.

SFC: Is there precedent in CLDR for using "business day" instead of "weekend"?

EAO: A quick Google search suggests that Brunei calls these days "weekend".

SFC: LAF, please open an issue on the repository to discuss the option name change.

SFC: Do we have consensus on the change?

LAF: +1

SFC: +1

Conclusion

Approved

Proposals and Discussion Topics

CollationsOfLocale() order

tc39/proposal-intl-locale-info#33

SFC: I feel that lists should define their sort order. This is similar to the plural rule strings discussion from a couple of months ago.

ZB: I represent the other side. I think developers should not be depending on the order.

LAF: (inaudible)

RGN: There is guaranteed to be an observable order. The question is whether that order is enforced across implementations, and if so, what should that order be?

FYT: Could we return a Set?

RGN: Sets also have observable order.

SFC: I propose we bring the meta question to TC39-TG1 as a change to the style guide.

LAF: +1 about order issue

FYT: OK

Conclusion

SFC to make a presentation to TC39-TG1 to establish a best practice in the style guide.

Define if "ca" Unicode extensions have an effect on Intl.Locale.prototype.weekInfo

tc39/proposal-intl-locale-info#30

LAF: My opinion about ISO-8601 is that it is not connected to any locale. Something like Gregorian is connected to a locale, and could carry week info. But ISO-8601 is international.

SFC: I think we should consult with CLDR.

FYT: This is about the first day of the week and minimal days in the week, not the weekend days. I personally believe that we shouldn't limit the extension; for example, a subdivision could have legislation to change this info.

LAF: In my opinion, the impact of saying whether Sunday or Monday is the first day of week, or on the minimal days, is to make a "week calendar": a calendar that lays out days in a week, dated by week number. I can imagine that some countries would like to distribute their own calendar, but I feel that there is a need among people to have the same week numbers. I don't know for sure where the correct place for this concept is.

ZB: This is inspired by the mozIntl API. The reason I needed it was for a general calendrical widget, the HTML picker. I think date pickers in general need this, not just calendar layout. I think it is a high-importance API.

SFC: I think the calendar subtag, or other subtags like the subdivision, should be taken into account.

FYT: I think we should take the whole locale to influence the result.

FYT: Do we need to make any changes to the proposal, and if so, what changes are needed?

RCA: No strong opinion on that, but concerned by the possible conflict with Temporal

Conclusion

SFC, FYT, and LAF agree that the whole locale (including extension subtags) should influence the weekInfo. FYT to share these notes with Anba and wait for follow-up.

JS Input Masking 🎭

FYT: Thanks for the discussion. (1) Some parts of what you proposed… if the formats are the same across different regions, it shouldn't be part of Intl. For example, if the ISBN format is the same across regions, it shouldn't be in Intl. (2) Is the name "input masking" correct? (3) A new item to consider is the postcode. That differs a lot around the world. The US has 5-4, India has 6 digits, Canada has special alphabetic rules. (4) It would be good to validate whether a string is a valid input. For example, maybe 13 digits is a valid ISBN, but not 14 digits. (5) A Googler on our team built libphonenumber, and it ended up being their full-time job for a while.

TOM: Postcodes are interesting. For validation, that's interesting and useful. Thanks for confirming that it is useful. I think it would make sense to have it in the proposal.

EAO: (1) Having built a library like this in the past, you start facing the issue of how to report errors on the input. So it becomes error reporting, but you need to do a best effort at the formatting while also reporting errors in a side channel. (2) Formatting while the string is being edited is just really hard; you should just wait until the field loses focus.

TOM: I agree that live updating the field is challenging. What you said about error reporting is interesting. Verification needs a lot of thought. I think it's something most developers probably want.

EAO: The biggest question is, how does the side-channel error reporting happen? Because that's an interesting question for a UI component like this.

TOM: It seems like it could hook into the mechanism for email verification that we already have. And for on-the-fly formatting, hopefully you could write the formatter so that it can listen to whatever event the developer thinks is the right event.

EAO: It's not just about a binary error. It's about providing more context to the error messages.

TOM: I think many things can be done. I'm new to this area, so I don't know the precedent. I'm looking for more experience.

ZB: Thanks TOM for the presentation. I've worked in this area before. I'm excited about the space, and I have a lot of questions. (1) Parsing is hard. There are a lot of questions here. What happens if they write LTR and RTL? What happens if they type in Arabic numerals? What if they use different kinds of separators? You quickly get into an uncanny valley. (2) You should also think about address formatting, which is like postcode and phone number. Where do you stop? (3) International placeholders is an interesting topic. How do you present a placeholder for a phone number? That really depends on the region. (4) I'm not sure that adding ??? is good for the scope of the spec. (5) About whether this belongs in a spec. It seems like a lot of UX teams will want to customize exactly what the output looks like: they agree on most of the format, but want to change a couple things. There's a good question about how much of this is i18n. (6) And finally, and this is the strongest point, if we were to specify what you are specifying, we would need to back it with a strong library. Because speccing it in ECMA-402 doesn't give us everything. So why not start with writing the foundational library, maybe one that can be used in many different programming languages, and then once you have the library, come back to ECMA-402 and ask whether we should bake it into the browser? That can then help us answer questions about whether the payload is sufficiently high such that it makes sense to ship it in the browser. So basically, I think we should start with a library. I think ECMA-402 is likely not the right place to start.

TOM: We could build a library, but we run the risk of making the "15th way of doing things" (in reference to the XKCD comic). Temporal started by making a polyfill, and is now integrating it into the browser. We already have a lot of input masking libraries.

RCA: I think this is really useful. (1) I'm concerned that the scope could be very large. (2) I'm concerned about what ZB said; organizations where I've worked have wanted to have their own way of doing things with slightly different interactions and so on. That formatter could be a custom thing for that institution. (3) Another thing is the interoperability with HTML. You could have an input credit card, the pattern, the validation, etc. (4) Highly interactive input fields could slow performance on low-resource devices.

TOM: For performance, the obvious tweak would be to do validation on the server.

YSZ: I think this is a super important part of the application. (1) Like FYT said, some of this data is not Intl data. (2) Phone validation is very complicated, like ZB said. We need to care about the UI; for example, inputting the credit card should trigger a numeric keyboard rather than an alphabetic keyboard. So it seems like we need new HTML input types for this. Did you consider starting there?

TOM: I thought about that, and I put it in the explainer as an alternative.

SFC: In order to avoid the "15th standard" issue, you should approach the industry leader in i18n standards, the Unicode Consortium, about making a working group to establish the industry canonical solution. ECMA-402 looks for prior art, and Unicode is the place we point to most often. This is similar in a way to the MessageFormat Working Group, which was chartered to resolve the competing standards for MessageFormat by bringing all the authors together.

TOM: Yeah, reaching out to Unicode and seeing if this has come up before would be a good option. As I've said, I had this in the String prototype, and then realized that this should maybe be Intl. Credit card numbers are generally not Intl, but phone numbers are. So creating that prior art makes sense.

ZB: I had discussed this a few years ago with Unicode. But with what SFC said, where there are multiple competing libraries, it means that we don't know what the answer is yet. Once we put it in ECMA-402, we won't be able to change it. When writing a library, we can make it and discard it with something better later. It makes sense that we need a place to assemble expertise from the many organizations. Maybe Unicode is the place. And only after we have that canonical implementation, we can evaluate whether it fits in ECMA-402.

MCM: The question about new input forms was raised earlier. Did you list use cases where form input types would NOT be sufficient for, where you need the JS APIs?

TOM: In a Node.js server, and you have a CSV file of unformatted phone numbers, you might want to format on the server. So it makes sense to have isomorphic Node and client-side behavior.

MCM: Has Node.js said that they need a standard for this? Aren't there already Node modules for this?

TOM: Deno is an interesting case. They've started implementing Web APIs like fetch. Programmers are used to the way Web APIs work, and they use them in Deno the way they expect them to work.

LookupMatcher should retain Unicode extension keywords in DefaultLocale

tc39/ecma402#608

GPT: Seems reasonable to me.

EAO: +1

CJR: +1

Conclusion

OK to move forward with this change; review the final spec text when ready.

ships the entire payload requirement

tc39/ecma402#588

Conclusion

FYT to follow up with Anba's suggestions on the Intl Enumeration API to harden the locale data consistency.

DateTimeFormat fractionalSecondDigits: conflict between MDN and spec

tc39/ecma402#590

GPT: It seems reasonable to match the Temporal behavior.

SFC: Do we want to add 4-9 now, or wait until Temporal is more stable?

Conclusion

Seems reasonable to move forward with a spec change. Still some open questions from Anba and SFC.

Presumptive incompatible change in future edition erroneuosly listed

tc39/ecma402#583

RGN: The spec version is immutable.

FYT: Is there a way to publish errata?

RGN: I don't think so… I do see some errata on ECMA International, but I don't see references to those errata.

SFC: The PR in question is tc39/ecma402#471. It was merged in January. I don't know why the change to Annex B made it into the edition, but not the normative change to numberformat.html.

FYT: The other issue is that we have long tables in the PDF that get cut off.

RGN: We're trying to raise funding to generate the PDF by a better mechanism.

Conclusion

Ujjwal to investigate.

Accept plural forms of unit in Intl.NumberFormat

tc39/ecma402#564

CJR: If we accept the plurals in RelativeTimeFormat, I can see a case for doing that also in NumberFormat.

SFC: There are basically 3 approaches. (1), we only accept singular units. (2), we accept plural forms for all units… stripping off the "s"? (3), only special-case duration units like days and hours.

EAO: Pluralization for all units is challenging. "inches", "kilometers-per-hour"

CJR: Having listened to your explanation, SFC, I agree with your assessment. Doing it on an ad-hoc basis is leading away from consistency.

RCA: +1 for not allowing plurals.

RGN: I share this opinion. Is there already a reference to CLDR, to prevent this from coming up again?

Conclusion

Stay consistent with CLDR, and add a normative reference to CLDR if there isn't already one.


Thursday 2021-09-09 19:47:59 by L4R5W

I had such a good breakfast today. My girlfriend went and got some stuff from the bakery and it was still warm inside :blobmelt:


Thursday 2021-09-09 20:54:10 by Karel Ha

Compete in Facebook Hacker Cup '21 Qualification Round

points: 64/100 pts ^_^ rank: 1011/12687 (out of ~34586 contestants)

  • 2nd among the 6 friends!
    • Barry had time during the night
    • could have overtaken him
      • had the idea for solutions already
  • PERCENTILE: >92% (~ >97% overall! :-O)

Analysis

  • struggle w/ the new format of input files:
    • 1 validation file
    • password-protected zip file
      • timer starts when the password is retrieved -> better for bad connections
        • long download doesn't affect the 6-min timer
  • clunky to handle multiple I/O files => ADD VALIDATION/ FOLDER INTO TEMPLATE!

A1: AC

  • ad-hoc construction
    • try 2 ways:
      • convert all to the best vowel
        • all consonants are 1 edit step away
        • all other vowels are 2 edit steps away
      • similarly, convert all to the best consonant
  • hesitations/slowdowns:
    • no need to simulate conversions
      • i.e. block of code below // all to best{Vow,Cons} <- could have been calculated directly using nVowels, nCons and cnt => THINK OF SIMPLEST IMPLEMENTATION ALREADY WHILE CODING/BEFORE TYPING UP!! (the A1 approach is sketched right after this list)
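A sketch of that A1 approach (reconstructed from the notes above, so the cost rules are an inference: changing a letter to the other class costs 1 step, to a different letter of the same class costs 2):

```python
VOWELS = set("AEIOU")

def min_operations(s):
    # Try every target letter; per character: 0 if already equal, 1 if the other
    # class, 2 if a different letter of the same class (one hop via the other class).
    best = float("inf")
    for target in "ABCDEFGHIJKLMNOPQRSTUVWXYZ":
        cost = 0
        for c in s:
            if c != target:
                cost += 2 if (c in VOWELS) == (target in VOWELS) else 1
        best = min(best, cost)
    return best

print(min_operations("CONSISTENCY"))
```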

B: AC

  • count X's of each row and of each col
    • if any contains O, set to -INF
  • find max such count
    • to compute minAdd=N-maxX;
  • count how many rows/columns have such max count
  • if (minAdd==1)
    • possible double counting
    • decrease by 1, when it's crossing of max column and max row

A2: AC

  • Floyd-Warshall for APSP -> to get the edit distance between each pair of nodes
  • brute force over all dest characters
    • compute the total cost of transforming each char
    • if any is unreachable to dest, leave cost to INF

C1: AC

  • graph of tunnels is a tree
  • DFS to recursively compute:
    • vector predecMasks(N);
      • bitset of predecessors from the root
    • vector gains(N);
      • total gains on the path from root
  • tunnelable between v1 and v2 iff
    • the only common ancestor is the root 1
    • otherwise can't pass back through the LCA
  • brute force over all pairs
    • that are tunnelable
      • subtract double counted gain of root C[0]
    • or that are identical vertex
      • i.e. root->vertex->tunnel back to root

Signed-off-by: Karel Ha [email protected]


Thursday 2021-09-09 20:54:17 by Chas

Update README.md

Basically a collection of macros I use/have edited to better suit my game. Will make sure to give credit. Below is crymic's guide to macro setup etc., if you find yourself here and are confused.

Modules referenced: Dynamic Active Effects, Midi-Qol, Item Macro, About Time, Combat Utility Belt, Token Magic FX, Macro Editor

Youtube Video on the subjects below.

FAQ Section

How to add a macro?

On the bottom hotbar of Foundry when in-game, click on an empty space. This will create and open a macro dialogue box. Here you can inject in the code. Make sure to toggle Type to script instead of chat.

Macro

How to execute a GM macro

To use this feature, you will need the module called The Furnace; this will give you the ability to Execute Macro as GM.

Why execute a macro as a GM macro

Foundry does not allow players to modify other players or NPCs, only themselves. The GM, however, has the power to do this. So to get around this, the player can call back to a macro which has permission to run at GM level. The only drawback with this is that a GM must be logged in and present for this to work.

Macro Execute

This method of macro execution requires a macro stored on a user's hotbar. To call this macro from within DAE itself, you will need to do the following steps. First locate the item you wish to apply the macro to. Then drag it from the character to your items directory on the right-side toolbar, and open it. At the top, click on the DAE button. Next, click on the little + symbol on the right-hand side. Name the Active Effect whatever the item is; it will help you reference it later if you need to. Now click on the far-right tab for Effects. Hit the + symbol to add a line. In the dropdown list on the left, at the absolute bottom, is Macro Execute; select it. The second field should say Custom; in the value field we want to enter the macro name to reference the macro on the hotbar. Now in my notes I will often have @target mentioned as well, or other variables. You will need to include those too.

So all together could be macro.execute custom "Rage" @target.

Active Effects DAE DE Macro

Item Macro

Now, instead of clicking on your hotbar to add the macro, you can go directly to the item and edit it. Above the item, you'll see a button for Item Macro. Once you've clicked it, a macro window will open; here paste or type in your macro. When done, save it.

Item Macro

Now, we need to let DAE know to use this new item macro we just installed. Much like the previous steps above, instead of choosing Macro Execute, choose Item Macro. This time we are not going to reference a macro name, because we don't need one. Alternatively, we only need to reference the variables which are going to be passed to the macro instead. So, @target is all we need.

DE Item Macro

On Use

Recent changes to DAE now make all macros run automatically with macro execute status. This becomes a problem when dialog boxes are involved. The dialog box will only show for the GM. To solve this issue, we need to use Midi-Qol's On Use feature.

New fields we can call upon..

actor = actor.data (the actor using the item)
item = item.data (the item, i.e. spell/weapon/feat)
targets = [token.data] (an array of token data taken from game.user.targets)
hitTargets = [token.data] (an array of token data taken from targets that were hit)
saves = [token.data] (an array of token data taken from targets that made a save)
failedSaves = [token.data] (an array of token data taken from targets that failed the save)
damageRoll = the Roll object for the damage roll (if any)
attackRoll = the Roll object for the attack roll (if any)
itemCardId = the id of the chat message item card (see below)
isCritical = true/false
isFumble = true/false
spellLevel = spell/item level
damageTotal = damage total
damageDetail = [type: string, damage: number] an array of the specific damage items for the attack/spell e.g. [{type: "piercing", damage: 10}]

First go into the Midi-Qol's module settings and click on Workflow Settings. Down at the very bottom you will see add macro to call on use, check it and save.

on use

Now when looking at an item's details. At the very bottom, there is a new field called On Use Macro, here enter ItemMacro.

on use macro

Then add the macro as normal to Item Macro. Make sure to remove any DAE Item Macro calls.

Troubleshooting

Nothing happens when I use your macro

Always read the comments section at the start of the macro; they are noted with a "//". Usually I will mention which secondary macros are required in order for it to run.

// at the top of the macro there is always detailed information. Please read it.

Still nothing happens when I use your macro

Often some macros use callback macros. These need to be placed on the GM's hotbar and marked as Execute as GM. Some of these callback macros I have written, others are done by other authors. Check my Callback Macros folder for more details.

You can also disable both options inside Item Macro module. Some people have reported that it fixed the issue.

disable item macro option

I get an error when using the macro

Sometimes you clipped something at the bottom of the macro. Try using ctrl + a, ctrl + c to copy then ctrl + v to paste it. This will ensure you get everything.


Thursday 2021-09-09 22:21:18 by CKRAMPUST

Update README.md

0:00 - Fertile - Kenshi - https://youtu.be/JUwQid4l_NQ
0:37 - Rude Buster - Deltarune - https://youtu.be/GPL5Hkl11IQ
0:55 - Disco Necropolis (Graveyard Stage) - Skeleton Boomerang - https://youtu.be/F8Wv5xoTnz4
1:42 - Demon's Souls Soundtrack - "Demon's Souls" - https://youtu.be/dn7dW6xCMXY
1:54 - Disco Necropolis (Graveyard Stage) - Skeleton Boomerang - https://youtu.be/F8Wv5xoTnz4
2:20 - Ripple Field 3 - Kirby's Dream Land 3 - https://youtu.be/MnFdDNYnoNA
2:59 - Wolves - Kanye West - https://youtu.be/OZHjWc0Ssvk
3:01 - Persona 5 OST - Rivers In the Desert (Instrumental version) - https://youtu.be/vE5uEesakTU
3:19 - JJBA - Dark Rebirth (Theme of DIO) - https://youtu.be/_Qq1B5na--s
3:21 - Persona 5 OST - Rivers In the Desert (instrumental version) - https://youtu.be/vE5uEesakTU
3:23 - Daft Punk - Robot Rock - https://youtu.be/HdeYwObD-j4
3:25 - Cirno's Theme - Adventure of the Lovestruck Tomboy - https://youtu.be/Ku1YlgMhMng
3:30 - Persona 5 OST - Rivers In the Desert (instrumental version) - https://youtu.be/vE5uEesakTU
4:21 - Cosmic Necropolis (Special Space Stage) - Skeleton Boomerang OST - https://youtu.be/TeCG9KVYUns
4:52 - "Magic Spear I" - Ace Combat 7 - https://youtu.be/2o_XGQ3EcrI
4:59 - Jak2 - Escaping The Fortress - https://youtu.be/csxVDmMM8g0
5:07 - "Magic Spear I" - Ace Combat 7 - https://youtu.be/2o_XGQ3EcrI
5:17 - Metal Gear Rising: Revengeance Vocal Tracks - Red Sun (Instrumental) - https://youtu.be/Mg510Yq-2Qk
5:41 - Sonic Mania OST - Studiopolis Act 1 - https://youtu.be/NIWyZmFSep0

============================================================= Enjoy ^^


Thursday 2021-09-09 22:59:34 by thingpony

Fuck active turfs

Candles

AND FUCK YOU ACTIVE TURFS


Thursday 2021-09-09 23:32:32 by ShiftyRail

Fixes a few goggles not working (#30644)

  • fixes night vision

  • God dammit fuck you lummox

  • vamp

  • oops


Thursday 2021-09-09 23:33:45 by Sam

Adjustments to match v0.2.8 closer

  • Boyfriend car & mom hair wave (idlePost)
  • Fading street lights in the Philly city area
  • Different animating type (twice per major beat hit)
  • Free play menu text in the top right scales properly (doesn't look weird anymore because I'm sorry but it did)

< 2021-09-09 >