3,044,017 events, 1,590,793 push events, 2,461,359 commit messages, 176,937,555 characters
update file: stylesheets/styles.css
This changeset represents the CSS changes as I found them as I sat down to work this morning. Judging from the timestamp on the file, the last changes were three days ago (at 2020-09-14 18:37:23 EDT).
The goal of the changes was to make better use of the screen real estate to allow more content to be displayed without scrolling. In particular, the code snippet sections previously had significant side scrolling, which has now been all but eliminated.
The main change is that the prose text is both justified and limited in width (so the columns do not get too wide). The prose width is now constrained more than the code snippet width, which produces a more comfortable reading experience (code snippets are just a different beast, so different presentation rules should apply).
I'm still not happy with it. The layout only really works for a ~900px wide area. It kinda scales down okay (enough), but doesn't scale up well. In particular, no matter how wide your screen is, the code areas will only take up so much space before a side-scrolling scrollbar is presented.
config.py
Holy shit I added gitignore.txt and its .gitignore! I'm a fucking idiot. Thanks to adding the actual gitignore
Create Minting-Mints.py
Problem statement:
It was one of the places where people need to get their provisions only through fair price (“ration”) shops. As the elders had domestic and official work to attend to, their wards were asked to buy the items from these shops. Needless to say, there was a long queue of boys and girls. To minimize the tedium of standing in the serpentine queue, the kids were given mints. I went to the last boy in the queue and asked him how many mints he has. He said that the number of mints he has is one less than the sum of all the mints of kids standing before him in the queue. So I went to the penultimate kid to know how many mints she has.
She said that if I add all the mints of kids before her and subtract one from it, the result equals the mints she has. It seemed to be a uniform response from everyone. So, I went to the boy at the head of the queue, consoling myself that he would not give the same response as others. He said, “I have four mints”.
Given the first kid’s number of mints (n) and the length (len) of the queue as input, write a program to display the total number of mints with all the kids.
Example-1
Input
4 2
Expected output:
7
Example-2
Input
14 4
Expected output
105
Update Minting-Mints.py
Problem statement:
It was one of the places where people need to get their provisions only through fair price (“ration”) shops. As the elders had domestic and official work to attend to, their wards were asked to buy the items from these shops. Needless to say, there was a long queue of boys and girls. To minimize the tedium of standing in the serpentine queue, the kids were given mints. I went to the last boy in the queue and asked him how many mints he has. He said that the number of mints he has is one less than the sum of all the mints of kids standing before him in the queue. So I went to the penultimate kid to know how many mints she has.
She said that if I add all the mints of kids before her and subtract one from it, the result equals the mints she has. It seemed to be the uniform response from everyone. So, I went to the boy at the head of the queue, consoling myself that he would not give the same response as others. He said, “I have four mints”.
Given the first kid’s number of mints (n) and the length (len) of the queue as input, write a program to display the total number of mints with all the kids.
Constraints:
2 < n < 10
1 < len < 20
Example-1
Input
4 2
Expected output:
7
Example-2
Input
14 4
Expected output
105
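The recurrence described above can be sketched directly: the first kid has n mints, and every later kid has one less than the sum of all the mints held by the kids before them. A minimal sketch (the function name `total_mints` is my own, not from the original submission):

```python
def total_mints(n: int, length: int) -> int:
    """Total mints held by a queue of `length` kids whose first kid has
    `n` mints; every later kid has (sum of all earlier mints) - 1."""
    mints = [n]
    for _ in range(length - 1):
        mints.append(sum(mints) - 1)
    return sum(mints)

print(total_mints(4, 2))   # Example-1 -> 7
print(total_mints(14, 4))  # Example-2 -> 105
```

Note that Example-2's n=14 actually falls outside the stated constraint 2 < n < 10, an inconsistency already present in the problem statement.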
Attempt to install various features to Firebase plugin, uh, errored
I must install as many of the missing @WolfgangSenff Firebase features as possible.
uh,,, that's it? only Storage isn't referenced? let me check https://github.com/WolfgangSenff/GodotFirebase/blob/master/GDFirebase/Firebase.gd ... Yep, it's missing Storage. We will also need cloud storage.
right... https://github.com/WolfgangSenff/GodotFirebase/blob/master/GDFirebase/Firebase.gd . yeah, missing. Why no Storage?
oh wait, FirebaseStorage.gd
is HTTPRequest, okay, sure.
um, how do I use Storage? help!
uh, perhaps it's incomplete. Idk where to go.
if this is not going to create a problem, I would like to contribute to the repository as well, so people will have those features.
idk how to say if "GDquest version is indeed entirely different" anymore.
Added the label "not a bug but I appreciate" to this GitHub repository, in case someone posts an issue that isn't a bug but rather asks for help with something that didn't work as it was supposed to, or reflects a lack of knowledge. Learn the lesson: never say "it's not a bug", "issues are for bugs", etc. That made some people of medium to high emotional sensitivity feel "stupid", "like an idiot", etc. Please just help & don't creep into anything else.
Update LICENSE
Changed from the MIT license to the Do What The Fuck You Want To license
Added a bunch of SL changes so nothing breaks (tm)
I know I had these as commits to the shadowlands branch, but I have no idea how GitHub works, so I don't really know how to pull request it all over to the live version. If I were Elon Musk himself, building and programming rockets that get sent to Mars, I still wouldn't know how to operate this GOD DAMN SHIT SITE AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
Merge: Performance improvements.
This patchset brings some performance improvements and the addition of the LZO-RLE algorithm to the kernel, also usable in zram (yup, tested, works but LZ4 is still ok for us).
The main performance improvement is for SWAP space: the locking has changed and the swap cache is now split into 64MB chunks. This reduces the median page fault latency from 15µs to 4µs (roughly a 3.75× improvement) and improves swap throughput by 192% (this includes "virtual" swap devices, like zRAM!). The real-world user experience improvement of this on a mobile device is seen after a day or two of usage, when it usually starts losing a little performance due to the large number of apps kept open in the background: now I cannot notice any performance loss anymore, and the user experience is basically the same as if the phone were in its first 2 hours of boot life.
Other performance improvements include, in short:
UDP v4/v6: 10% more performance on single RX queue
Userspace applications will be faster when checking the running time of threads
2-5% improvements on heavy multipliers (yeah, not a lot, but was totally free...)
Improvements during sparse truncate: about 0.3% in rare conditions, up to around 20% in far rarer ones (that's never gonna happen, but there is no performance drop anywhere).
Tested on SoMC Tama Akatsuki RoW
This was taken from Repo: https://github.com/sonyxperiadev/kernel PR: 2039 ([2.3.2.r1.4] Performance improvements)
Signed-off-by: sweetyicecare [email protected] Signed-off-by: Aoihara [email protected]
"8:45am. Tonight's sleep was truly ideal. Not only did I go to sleep at 11pm because I started reading Dendro vol 12, but I even spent some time lounging in bed today. Despite that, it is this early. Tonight I fell asleep without any issue at all. Great.
8:50am. Ok, let me do my usual morning chilling and then I will start programming. Today I want to deal with validating the files and providing links to them. This won't be difficult.
9:10am. The feeling I have right now is good. Intense pressure is good while you are thinking about the design. But when you are establishing a routine, the feeling should be more like a walk in the park. You don't want to have the urge to do it all day like that, or to feel like your life is on the line.
Let me chill for a while longer and I will start.
9:55am. Let me finish the chapter of the crap I am reading and then I will start. I've wasted enough time. The first order of business will be modifying the config parser so it keeps ranges.
I'll also validate the files as well.
10am. Focus me. It is time to start. Things are pretty comfortable right now. All I need to do is dedicate a little bit of effort.
10:05am.
files : {|uri : string; range : VSCRange|} []
Let me start by making the relevant part of the schema this.
10:10am.
let range f p = pipe3 pos' f pos' (fun a b c -> ((a,c) : VSCRange), b) p
Let me make this helper.
type ConfigResumableError =
| DuplicateFiles of (string * VSCPos []) []
| DuplicateRecordFields of (string * VSCPos []) []
| MissingNecessaryRecordFields of string [] * VSCRange
| DirectoryInvalid of string * VSCPos
Wow, what the hell am I doing here. This should all be ranges.
type ConfigFatalError =
| Tabs of VSCPos []
| ConfigCannotReadProjectFile of string
| ConfigProjectDirectoryPathInvalid of string
| ParserError of string * VSCPos
| UnexpectedException of string
I should make some of these errors outright exceptions.
let range f p =
let decr (x : VSCPos) = {line=x.line-1; character=x.character-1}
pipe3 pos' f pos' (fun a b c -> ((decr a, decr c) : VSCRange), b) p
Let me do the range like this.
10:20am.
let pos (p : CharStream<_>) : VSCPos = {line=int p.Line - 1; character=int p.Column - 1}
Actually, let me do this.
10:25am. Focus me, focus. Stop surfing /pol/ on the side. Though this is my usual pace. Let me just iron out my will and get this done.
(file_hierarchy_list |>> Array.collect (flatten "file://")) p
Let me make this into a true uri.
let tab_positions (str : string): VSCPos [] =
Utils.lines str
|> Array.mapi (fun line x -> {line=line; character=x.IndexOf("\t")})
|> Array.filter (fun x -> x.character <> -1)
Modified this.
But wait, should I really be increasing by 1 here? Is VS Code 1 or 0 based? I think FParsec is 1 based.
let tab_positions (str : string): VSCRange [] =
let mutable line = -1
Utils.lines str |> Array.choose (fun x ->
line <- line + 1
let x = {line=line; character=x.IndexOf("\t")}
if x.character <> -1 then Some(x,{x with character=x.character+1}) else None
)
Let me do it like this.
Thankfully this is used in only one place. So it is fine.
11am.
let config (uri : string) spiproj_text =
try
let project_directory =
try DirectoryInfo(uri).Parent.FullName
with e -> raise' (ConfigProjectDirectoryPathInvalid e.Message)
This is not supposed to be directory info but file info.
const spiprojOpen = (doc: TextDocument) => { spiprojOpenReq(doc.uri.toString(true), doc.getText()) }
Hmmm... It is loose. I note here that I am just grabbing the doc.uri. I am sending the file uri, but the config parser is expecting the dir uri.
...Let me take a short break here.
11:30am. I am back.
let project_directory =
try DirectoryInfo(uri).Parent.FullName
with e -> raise' (ConfigProjectDirectoryPathInvalid e.Message)
Does this actually work?
let project_directory =
try FileInfo(uri).Directory.FullName
with e -> raise' (ConfigProjectDirectoryPathInvalid e.Message)
I'd expect something like this would be better.
I really do not need this exception. Let me just have this kind of error break the plugin.
type ConfigFatalError =
| Tabs of VSCRange []
| ConfigCannotReadProjectFile of string
| ConfigProjectDirectoryPathInvalid of string
| ParserError of string * VSCRange
| UnexpectedException of string
Let me get rid of the errors without ranges.
type VSCErrorOpt = string * VSCRange option
Don't need this anymore.
I think that for things like compiler errors that have no location, I should use a special channel to send them instead of not including the range.
| DuplicateFiles of (string * VSCRange []) []
| DuplicateRecordFields of (string * VSCRange []) []
Let me get rid of the string here.
let project_directory = FileInfo(uri).Directory.FullName
Actually, I should not do this like so.
let project_directory = FileInfo(Uri(uri).LocalPath).Directory.FullName
Here it is.
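The `FileInfo(Uri(uri).LocalPath)` fix above first converts the file URI into a local filesystem path, then takes the containing directory. A rough Python analogue of the same idea (a sketch only; `project_directory` is my own name, and it assumes POSIX-style `file://` URIs):

```python
from pathlib import PurePosixPath
from urllib.parse import unquote, urlparse

def project_directory(uri: str) -> str:
    """Given the file:// URI of a project file, return its directory."""
    # decode percent-escapes, then drop the file name component
    path = unquote(urlparse(uri).path)
    return str(PurePosixPath(path).parent)

print(project_directory("file:///home/user/proj/package.spiproj"))
# -> /home/user/proj
```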
11:50am. Uahhhh...Let me modify things a bit on the VS Code side. I have a serious problem, so let me just get that out of the way first.
const errorsSet = (errors : DiagnosticCollection, uri: Uri, x: [string, RangeRec][]) => {
const diag: Diagnostic[] = []
x.forEach(([error, range]) => {
diag.push(new Diagnostic(
new Range(range[0].line, range[0].character, range[1].line, range[1].character),
error, DiagnosticSeverity.Error))
})
errors.set(uri, diag)
}
Simplified this a bit.
type PositionRec = { line: number, character: number }
type RangeRec = [PositionRec, PositionRec]
type Errors = [string, RangeRec][]
type ClientRes =
{ ProjectErrors: {uri : string, errors : Errors} }
| { TokenizerErrors: {uri : string, errors : Errors} }
| { ParserErrors: {uri : string, errors : Errors} }
| { TypeErrors: {uri : string, errors : Errors} }
Hmmm...
type PositionRec = { line: number, character: number }
type RangeRec = [PositionRec, PositionRec]
type Errors = {uri : string, errors : [string, RangeRec][]}
type ClientRes = { ProjectErrors: Errors } | { TokenizerErrors: Errors } | { ParserErrors: Errors } | { TypeErrors: Errors }
Should have done this to begin with.
12:10pm. Agh, that very serious problem is something I am still thinking about. I just forgot about this.
During typechecking, I am just plowing things to the global env. How am I going to take that stuff and turn it into a module instead?
And even moreso, having directories for submodules is an extra complication.
12:25pm. Let me take a break here.
I have no idea how to deal with this. I am even thinking of getting rid of directories from the file list. I am going to have to change things.
12:55pm. Uhhh, need to take care of this. I am blocked again.
Let me have breakfast. After that I'll make sure that the latest changes did not break everything.
1pm. I am leaning towards adding another term and ty field to the TopEnv. Yeah, it is annoying, but what am I going to do?"
Machine_Learning_Assignment_1
MATH2319 Machine Learning Semester 1, 2020
Assignment 1
Assignment Rules: Please read carefully!
- Assignments are to be treated as "limited open-computer" take-home exams. That is, you must work on the assignments on your own. You must not discuss your assignment solutions with anyone else (including your classmates, paid/unpaid tutors, friends, parents, relatives, etc.) and the submission you make must be your own work. In addition, no member of the teaching team will assist you with any issues that are directly related to your assignment solutions.
- All solutions must be provided in Python 3.6+ with all results documented in Jupyter Notebook.
- You must clearly show all your work for full credit. In particular, you need to clearly label your solutions with appropriate headings & subheadings, lists, etc. Also keep in mind that just providing Python code will not get you full credit even if it's correct. You need to explain all your reasoning and document all your steps in plain English. That is, you must submit a professional piece of work as your assignment solutions.
- For solutions that are ambiguous, or solutions that are all over the place, you may receive zero points (even if it's correct!) as we have no obligation to spend hours and hours of our time to decipher your notebook.
- Once you are done, it is your responsibility to run your notebook and then save it as an HTML file before submission. Your solutions shall be marked exactly as they appear in your HTML file.
- You must submit a single file (in HTML format) that contains all your solutions to all the questions.
- For other assignment rules, please refer to this web page: https://rmit.instructure.com/courses/67061/assignments/424265
- It is your responsibility to follow any and all assignment rules stated in the above web page.
- Do not forget to include the Honour Code or your assignment shall not be marked.
- If you need to make any assumptions at any point so that you can continue for any question, please state these assumptions and clearly explain your reasoning.
- Suspected cheating incidents shall be reported to RMIT Student Conduct Office for possible disciplinary action.
Question 1 (65 points)
Data preprocessing is a critical component in machine learning and its importance cannot be overstated. If you do not prepare your data correctly, you can use the fanciest machine learning algorithm in the world and your results will still be incorrect. For this question, you will perform any and all data preprocessing steps on a dataset from the UCI ML Datasets Repository so that the clean dataset you end up with can be directly fed into any classification algorithm within the Scikit-Learn Python module without any further changes. This dataset is the Credit Approval data at the following address: https://archive.ics.uci.edu/ml/datasets/Credit+Approval
The UCI Repository provides four datasets, but only two of them will be relevant:
crx.names: some basic info on the dataset together with the feature names & values
crx.data: the actual data in comma-separated format
Instructions:
- If you are having issues with reading in the dataset directly (which is most likely due to UCI's or your web browser's SSL settings), you can download the file on your computer manually and then upload it to your Azure project, which you can then read in as a local file.
- This is a very small dataset. So please do not perform any sampling.
- Make sure you follow the best practices outlined in the Data Prep lecture presentation (on Chapters 2 and 3) on Canvas and the Data Prep tutorial on our website.
- As a general rule, all categorical features need to be assumed to be nominal unless you have evidence to the contrary.
- This is an anonymised dataset. Thus, do not flag any numerical values as outliers regardless of their value for numerical features. As another hint, you won't have to look for outliers in categorical features either. However, you will need to look for some unusual values for both numerical and categorical features.
- For this question, you are to set all unusual values to missing values. Also, you are to impute any missing values with the mode for categorical features and with the median for numerical features. If there are multiple modes for a categorical feature, use the mode that comes first alphabetically.
- For the A2 numerical descriptive feature, you are to discretize it via equal-frequency binning with 3 bins named "low", "medium", and "high", and then use integer encoding for it.
- For normalization, you are to use standard scaling. You are allowed to use Scikit-Learn's preprocessing submodule for this purpose.
- The target feature needs to be the last column in the clean data and its name needs to be target .
- You must perform all your preprocessing steps using Python. For any cleaning steps that you perform via Excel or simple find-and-replace in a text editor or any other language or in any other way, you will receive zero points.
- It's critical that the final clean data does not need any further processing so that it will work without any issues with any classifier within Scikit-Learn.
- Round all real-valued columns to 3 decimal places.
- Once you are done, name your final clean dataset as df_clean (if it's not already named as such).
- At the end, run each one of the following three lines in three separate code cells for a summary: df_clean.shape df_clean.describe(include='all').round(3) df_clean.head(5)
- Save your final clean dataset exactly as "df_clean.csv". Make sure your file has the correct column names (including the target column). Next, you will upload this CSV file on to Canvas as part of your assignment solutions. That is, in addition to an HTML file (that contains your solutions), you also need to upload your clean data in CSV format on Canvas with this name. Please do not ask teaching staff any questions about this Credit Approval dataset as we do not know anything more than what UCI already provides on their website. If you still need any help, please remember that you are allowed to search the Internet for generic questions, such as "how to change column order in Pandas" etc. Keep in mind that 99% of the time, a Google search will provide you a much faster response for your questions when compared to posting it on a discussion forum. If you run into any errors, the best course of action would be just to Google your error message. Good luck!
For Question 2, please follow the instructions below:
- Textbook info can be found on Canvas at this link: https://rmit.instructure.com/courses/67061/pages/course-resources
- You must show all your calculations and you must perform all your calculations using Python. You must also document all your work in Jupyter notebook format.
- You may not use any one of the classifiers in the Scikit-Learn module. Likewise, you may not use any one of the preprocessing methods in the Scikit-Learn module. You will need to show and explain all your solution steps without using the Scikit-Learn module. You will not receive any points for any work that uses Scikit-Learn for Question 2. The reason for this restriction is so that you get to learn how some things work behind the scenes. But don't worry, you will be using Scikit-Learn quite a bit in subsequent assessments.
Question 2 (35 points, 7 points for each part)
Solve Chapter 5, Exercise 3 (all five parts) in the textbook, but instead of the Euclidean distance, use the Manhattan distance. All exercise parts must be solved with the Manhattan distance metric.
www.featureranking.com
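Two of the trickier preprocessing instructions above, mode imputation with an alphabetical tie-break and equal-frequency binning of A2 with integer encoding, might be sketched roughly as follows. This is not the assignment solution; the toy frame and the column names "A1"/"A2" are placeholders, not the real Credit Approval data:

```python
import pandas as pd

# Hypothetical stand-in for the Credit Approval data:
# 'A1' is categorical with a missing value, 'A2' is numerical.
df = pd.DataFrame({
    "A1": ["a", "b", None, "b", "a"],
    "A2": [1.0, 2.0, 3.0, 4.0, 5.0],
})

# Mode imputation, taking the alphabetically-first mode on ties
# (Series.mode() returns all modes; sort and take the first).
mode = df["A1"].mode().sort_values().iloc[0]
df["A1"] = df["A1"].fillna(mode)

# Equal-frequency (quantile) binning of A2 into 3 bins, then integer
# encoding via the ordered categorical's codes (low=0, medium=1, high=2).
df["A2"] = pd.qcut(df["A2"], q=3, labels=["low", "medium", "high"])
df["A2"] = df["A2"].cat.codes
```

`pd.qcut` does the equal-frequency split (as opposed to `pd.cut`, which is equal-width), and `.cat.codes` gives the integer encoding in the bins' label order.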
Create Machine_Learning_Assignment_3
MATH2319 Machine Learning Semester 1, 2020
Assignment 3 - No Competition
Assignment Rules: Please read carefully!
- Assignments are to be treated as "limited open-computer" take-home exams. That is, you must not discuss your assignment solutions with anyone else (including your classmates, paid/unpaid tutors, friends, parents, relatives, etc.) and the submission you make must be your own work. In addition, no member of the teaching team will assist you with any issues that are directly related to your assignment solutions.
- For other assignment Codes of Conduct, please refer to this web page on Canvas: https://rmit.instructure.com/courses/67061/pages/assignments-summary-purpose-code-of-conduct-and-assessment-criteria
- You must document all your work in Jupyter notebook format. Please submit one Jupyter notebook file & one HTML file per question. Specifically, you must upload the following 2 files for this assignment:
StudentID_A3_Q1.html (example: s1234567_A3_Q1.html)
StudentID_A3_Q1.ipynb
- Please put your Honour Code at the top in your answer for the first question.
- Please make sure your online submission is consistent with the checklist below: https://rmit.instructure.com/courses/67061/pages/online-submissions-checklist
- For full Assignment Instructions and Summary of Penalties, please see this web page on Canvas: https://rmit.instructure.com/courses/67061/pages/instructions-for-online-submission-assessments
- So that you know, there are going to be penalties for any assignment instruction or specific question instruction that you do not follow.
Programming Language Instructions
You must use Python 3.6 or above throughout this entire Assignment 3. Use of Microsoft Excel is prohibited for any part of any question in this assignment. For plotting, you can use whatever Python module you like.
Question 1 (100 points)
This question is inspired by Exercise 5 in Chapter 6 in the textbook. Our problem is based on the US Census Income Dataset that we have been using in this course. Here, the annual_income target variable is binary: either high_income or low_income . As usual, high income will be the positive class for this problem. For this question, you will use different variations of the Naive Bayes (NB) classifier for predicting the annual_income target feature. You will present your results as Pandas data frames. Bayesian classifiers are some of the most popular machine learning algorithms out there. The goal here for you is then two-fold:
- To gain valuable skills on how to use popular variants of the Naive Bayes classifier using Scikit-Learn and
- Be able to identify which variant to use for a given particular dataset.
Throughout this question, use the "A3_Q1_train.csv" dataset (with 500 rows) to build NB models. Assume that the "A3_Q1_train.csv" dataset is clean in the sense that there are no outliers or any unusual values. Use accuracy as the evaluation metric to train models.
NOTE: In practice, you should never train and test using the same data. This is cheating (unless there is some sort of cross-validation involved). However, throughout this entire Question 1, you are instructed to do just that to make coding easier. That is, for all relevant parts and tasks in Question 1, you are to train and test using the same data. Besides, NB is a simple parametric model and the chances that it will overfit for this particular problem are relatively small.
Part A (10 points): Data Preparation
TASK 1 (5 points): Transform the 2 numerical features (age and education_years) into 2 (nominal) categorical features. Specifically, use equal-width binning with the following 3 bins for each numerical feature: low , mid , and high . Once you do that, all the 5 descriptive features in your dataset will be categorical. Your dataset's name after Task 1 needs to be df_all_cat. Please make sure to run the following code for marking purposes:
pd.set_option('display.max_columns', None)
print(df_all_cat.shape)
df_all_cat.head()
for col in df_all_cat.columns.tolist():
    print(col + ':')
    print(df_all_cat[col].value_counts())
    print('********')
HINT: You can use the cut() function in Pandas for equal-width binning.
TASK 2 (5 points): Next, perform one-hot-encoding (OHE) on the dataset (after the equal-width binning above). Your dataset's name after Task 2 needs to be df_all_cat_ohe. Please make sure to run the following code for marking purposes:
print(df_all_cat_ohe.shape)
df_all_cat_ohe.head()
You will provide your solutions for Parts B, C, and D below after you have taken care of the above two data preparation tasks.
HINT: For this Part A, please make sure to follow the data preparation best practices that you have learned in the course.
MARKING NOTE: If your data preparation steps are incorrect, you will not get full credit for a correct follow-through.
Part B (15 points): Bernoulli NB
In the Chapter 6 PPT Presentation, we recently added some explanation of a useful variant of NB called Bernoulli NB. Please see the updated Chapter 6 PPT Presentation on Canvas. For this part, train a Bernoulli NB model (with default parameters) using the train data and compute its accuracy on, again, the train data.
Official documentation on Bernoulli NB: https://scikit-learn.org/stable/modules/generated/sklearn.naive_bayes.BernoulliNB.html
Part C (15 points): Gaussian NB
For this part, train a Gaussian NB model (with default parameters) using the train data and compute its accuracy on, again, the train data. As you know, Gaussian NB assumes that each descriptive feature follows a Gaussian probability distribution. However, this assumption no longer holds for this problem because all features will be binary after the data preparation tasks in Part A. Thus, the purpose of this part is to see what happens if you apply Gaussian NB on binary-encoded descriptive features.
Official documentation on Gaussian NB: https://scikit-learn.org/stable/modules/generated/sklearn.naive_bayes.GaussianNB.html
Part D (20 points): Tuning your Models
In this part, you will fine-tune the hyper-parameters of the Bernoulli and Gaussian NB models from the above two parts to see if you can squeeze out a bit of additional performance through hyper-parameter optimization.
TASK 1 (5 points each): Tuning: Fine-tune the alpha parameter of the Bernoulli NB model and the var_smoothing parameter of the Gaussian NB model.
TASK 2 (5 points each): Plotting: Display a plot (with appropriate axis labels and a title) that shows the tuning results. Specifically, you will need to include two plots:
- One plot for Bernoulli NB tuning results
- One plot for Gaussian NB tuning results
You must clearly state the respective optimal hyper-parameter values and the corresponding accuracy scores. There are no hard rules for hyper-parameter fine-tuning here except that you should follow fine-tuning best practices.
HINT: You can perform these fine-tuning tasks in simple "for" loops.
Part E (35 points): Hybrid NB
In the real world, you will usually work with datasets with a mix of categorical and numerical features. On the other hand, we have covered two NB variants so far: Bernoulli NB, which assumes all descriptive features are binary, and Gaussian NB, which assumes all descriptive features are numerical and follow a Gaussian probability distribution. The purpose of this part is to implement a Hybrid NB Classifier on the "A3_Q1_train.csv" dataset that uses Bernoulli NB (with default parameters) for the categorical descriptive features and Gaussian NB (with default parameters) for the numerical descriptive features. You will specifically train your Hybrid NB model using the train data and compute its accuracy on, again, the train data.
This part will require you to think about how NB classifiers work in general and how Bernoulli and Gaussian NB classifiers can be combined via the "naivety" assumption of the Naive Bayes classifier.
Part F (5 points): Wrapping Up
For this part, you will summarize your results as a Pandas data frame called df_summary with the following 2 columns:
- method
- accuracy (please round these accuracy results to 3 decimal places)
As for the method , you will need to include the following methods in the order given below:
- Part B (Bernoulli NB)
- Part C (Gaussian NB)
- Part D (Tuned Bernoulli NB)
- Part D (Tuned Gaussian NB)
- Part E (Hybrid NB)
After displaying df_summary, please briefly explain the following:
(i) Whether hyper-parameter tuning improves the performance of the Bernoulli and Gaussian NB models, respectively.
(ii) Whether your Hybrid NB model has more predictive power than the (untuned) Bernoulli and Gaussian NB models, respectively.
www.featureranking.com
My commit messages suck; trying to get better at it. A lot has been modified. Got copy into a Quay server to work; Docker-distribution (registry) is very forgiving, apparently. While getting copy-to-registry to work correctly, found and fixed several bugs: a deletion bug for indicator, a time-parse bug for token auth in Client, and most likely more I can't remember
Gift Rift
Our Chef is very happy that his son was selected for training in one of the finest culinary schools of the world. So he and his wife decide to buy a gift for the kid as a token of appreciation. Unfortunately, the Chef hasn't been doing good business lately, and is in no mood on splurging money. On the other hand, the boy's mother wants to buy something big and expensive. To settle the matter like reasonable parents, they play a game.
They spend the whole day thinking of various gifts and write them down in a huge matrix. Each cell of the matrix contains the gift's cost. Then they decide that the mother will choose a row number r while the father will choose a column number c, the item from the corresponding cell will be gifted to the kid in a couple of days.
The boy observes all of this secretly. He is smart enough to understand that his parents will ultimately choose a gift whose cost is smallest in its row, but largest in its column. If no such gift exists, then our little chef has no option but to keep guessing. As the matrix is huge, he turns to you for help.
He knows that sometimes the gift is not determined uniquely even if a gift exists whose cost is smallest in its row, but largest in its column. However, since the boy is so smart, he realizes that the gift's cost is determined uniquely. Your task is to tell him the gift's cost which is smallest in its row, but largest in its column, or to tell him no such gift exists.
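The search the boy needs can be sketched as a brute-force pass: for each row, take its minimum and check whether that cell also dominates its entire column. This is my own illustrative sketch, assuming the matrix is given as a list of rows; the function name is not from the original problem:

```python
def gift_cost(matrix):
    """Return the cost that is smallest in its row and largest in its
    column, or None when no such gift exists. The problem guarantees
    the cost is unique even if several cells attain it."""
    for row in matrix:
        m = min(row)
        for j, v in enumerate(row):
            # the row minimum must also be >= every entry in its column
            if v == m and all(other[j] <= v for other in matrix):
                return v
    return None

print(gift_cost([[9, 8, 8],
                 [2, 6, 4]]))  # -> 8
```

Returning the cost value (rather than a cell position) is exactly what makes the non-unique-cell case harmless.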
Fixed error in test
Appease Travis CI (EC, EW checks)
OWN OLD STUFF
copy nordic_ergo
my layout from kiibohd firmware
remove media layer and stuff
fix fuckups and add ESCCTRL
better visualizer
Fix indentations and alignments
Func layer colors have precedence
Func layer more important -> higher
gaming layer and auto shift mode
autoshift tuning
browser controls to func layer
fix KC_BLSL -> KC_EQL
disable auto shift
MT(Shift, Enter)
ENTLGUI and SPCLSFT mod taps for right hand cluster
make left alt insert space on tap
remove ignore_mod_tap_interrupt completely
use the shortcuts for common mod taps
tapping term per key
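A per-key tapping term in QMK is set by overriding `get_tapping_term` in the keymap (a config-fragment sketch assuming `TAPPING_TERM_PER_KEY` is defined in `config.h`; the keycode and offset below are illustrative, not the author's exact settings):

```c
#include QMK_KEYBOARD_H

/* Give MT(Shift, Space) a shorter tapping term than everything else. */
uint16_t get_tapping_term(uint16_t keycode, keyrecord_t *record) {
    switch (keycode) {
        case SFT_T(KC_SPC):
            return TAPPING_TERM - 50;  /* illustrative offset */
        default:
            return TAPPING_TERM;
    }
}
```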
change to _pretty macro
remove useless rgb custom keycode
remove useless eprm keycode
remove useless BASE layer toggle
remove useless action_get_macro function
ignore_mod_tap_interrupt for only some of my MT keys
Fix MT keys names. Make left LGUI MT(LGUI, BSPC)
Ignore MT Interrupts: Allow QMK to continue.
Allow QMK to process other possible actions after the ignore_mod_tap_interrupt function.
MT(left shift, ISO/)
Default numbers for numpad. No need for NLCK.
vim folding and markers
LT(symbol layer, enter)
add missing symbol keys and add visualizer for the layer
symbol layer color change
fold markers for functions and removal of useless functions
fill symbol layer ascii art and remove RALT(KC_1) as it does nothing
Smaller tapping term for MT(Shift, Space)
fix symbol layer lsft(grave) -> ralt(grave)
swap hands buttons
default tapping term for SYMBENT
Move media buttons to right hand
Move F-keys to left hand. No need for number keys anymore
a bit shorter tapping term
some initial stuff
gergo refactor
Move a lot of stuff into userspace. Copy rest of the layers from ergodox.
tap dance home end
enable tap dance for gergo and place HOMEND
more tap dancing
more compact binary
use new ignore_mod_tap_interrupt_per_key instead of own func
gergo layout changes
put tapdances into own file. add advanced left alt dance
add ´` to symbol layer
ccls language server file
some layout changes
Move RALT to SYMB layer and put RSFT where RALT was. R20 button is now fixed and has BSPC in it like it was supposed to. But it seems that I'm already used to BSPC being under left thumb. :D Added a layer for RESET and maybe some other stuff in the future.
just remove the BSPC from the old pos. no need for it
add tilde to SYMB layer
tab+[QWERTYUIOP] combos for alt+num and left alt is now just MT
make STUF layer the 15th layer
remove combo enum. add mute combo. add button for toggling combos
add sleep button to STUF
tapdance: don't press shift space...
add alt+0 combo for tmux
initial thumbstick mod stuff
rework all user config to work with ergodox too
ccls with everything enabled
show couple of combos in the ascii art
leader key
ccls: no missing braces
change symb and shift on right hand. reorder F buttons on left hand
move RALT under FUNC
move stuff around a bit
no more tapdance.h
more combos, combo ascii map, faster combo_term
MODS to home row, <> combos to lower row, flip get_ignore_mod_tap_interrupt
remove home row mods, they interfere too much :D
add leader button for left hand
few more combos
move RALT right a bit
slash combo to jk
just a few home row mods: ctrl a, ctrl ö and ctrl ä
put combos into their own file with fancy macros
even smaller combo_term
move combos.def under userspace
move basic keymaps under userspace so we can copy them to all keyboards
and use them on ergodox
use keymap wrappers in gergo and make everything use the nordic keys
use swedish instead of nordic (deprecated)
add kc_dot under SYMB for ergodox and fix formatting
ergodox: put SYMB and FUNCL at the same pos as gergo has
formatting
", ( and ) combos
tap dancing, combos, random tuning
leader keys for resetting gergo
gergo COMBO_ALLOW_ACTION_KEYS
gergo leader timeout 500
more combos now that combos with modtaps are fixed
use new COMBO_VARIABLE_LEN
Layers to enum
do combos the enum way so that process_combo_event can be used
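The "enum way" is QMK's `COMBO_ACTION` pattern, where combos are indexed by an enum so `process_combo_event` can dispatch on them. A config-fragment sketch assuming `COMBO_ENABLE` (the `VIM_WQ` combo and its `:wq` output are hypothetical examples, not the author's actual bindings):

```c
#include QMK_KEYBOARD_H

enum combo_events {
    VIM_WQ,
    COMBO_LENGTH
};
uint16_t COMBO_LEN = COMBO_LENGTH;  /* variable-length combo count */

const uint16_t PROGMEM wq_combo[] = {KC_W, KC_Q, COMBO_END};

combo_t key_combos[] = {
    /* COMBO_ACTION has no direct keycode; it is handled below. */
    [VIM_WQ] = COMBO_ACTION(wq_combo),
};

void process_combo_event(uint16_t combo_index, bool pressed) {
    switch (combo_index) {
        case VIM_WQ:
            if (pressed) SEND_STRING(":wq\n");
            break;
    }
}
```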
MT_S is shift
More combos. No permissive hold for any of the home row MTs
more combos
left hand number combos and move wheel combos a bit
MT_G as LGUI. Small experiment with LCTLBSP under left thumb
make gergo and ergodox configs the same
group close by combos
percent combo with MT_G
move LGUI from thumb to ESC. Thumb will have CTRL at some point
CODEBLK keycode and code
Make key wrappers for row 4 key pair.
Add FUNCL row 4 wrappers and thumb key pair wrappers
fix gergo qwerty_l4
Fix breaking changes May 2020
Change alt+n combos to thumb plus top row
New mouse layer. Remove mouse combos
Change mouse layer button to a LT
Remove number combos. Couldn't use them... :D
Wrappers from stuff layer. Move combos more center.
Add combos for outer column keys.
Rename MT_ -> MY_. Move rename thumb keys
stuff
MT ALT to minus key and some combo.h formatting
Layer combo testing.
Test mod combo. Change leader key stuff to combos.
Lots of combo related stuff
Layer combos for MOUS and SYMB
Gergo thumbstick!!
Better mouse handling.
Better mouse handling around deadzone.
mouse: log isn't that useful. Linear is much better.
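The linear deadzone mapping preferred above can be sketched as a pure function (the name and the normalized [-1, 1] axis range are assumptions, not the keymap's real code): readings inside the deadzone produce zero, and the remaining range is rescaled linearly so the output still spans the full range.

```c
/* Map a raw thumbstick axis reading in [-1, 1] to an output in [-1, 1],
 * ignoring a central deadzone and rescaling the rest linearly. */
float apply_deadzone(float raw, float deadzone) {
    float mag = raw < 0.0f ? -raw : raw;
    if (mag <= deadzone) return 0.0f;                  /* inside deadzone */
    float scaled = (mag - deadzone) / (1.0f - deadzone);
    return raw < 0.0f ? -scaled : scaled;              /* keep the sign */
}
```

Rescaling avoids the jump a naive clamp would cause at the deadzone edge, which is why linear feels smoother than a log curve near center.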
Random config
Make browser combos thumb+alpha combos
Move combos around. BSPC and DEL combos for right hand
TAB combo upper left, BSPC combo upper right
Ergodox gaming layer tuning
New combo for question mark.
Just remove the old question mark combo
Combo testing for precondition
Slightly bigger deadzone for gergo thumbstick
Fix vim combos firing twice.
backup
Some combo changes.
lots of assorted changes
Fix mouse layer. Move Super to left thumb and mouse1 under a combo.
Remove thumbstick from gergo
script: Die class
"Holy Chits" is a joke based on the mechanism of "chits" that was used for random-number generation in tabletop games back before polyhedral dice (other than the traditional cube) were commonplace. They worked as simply as you'd expect: you'd photocopy a page out of your gamebook, snip out the small grid, sort the pieces into little jars or bowls based on the number range on the back, and draw a number.
Nowadays people really like the clatter of their dice on the table — especially the ones from your Pathfinder group who have a tendency to sling spells that call for at least ten dice. But Holy Chits! is a project of discretion. It isn't necessarily a throwback to the use of chits; instead, it serves as a pre-rolled list of numbers to print out and tuck in your binder in case you lent your dice to friends who don't have any, forgot to bring your set, or are in a situation where pulling out dice and rolling them wouldn't be acceptable.
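The project's core — a pre-rolled sheet of die results — takes only a few lines to generate (a sketch: the function name and seeding scheme are assumptions, and `rand()` is tabletop-grade randomness, nothing stronger):

```c
#include <stdlib.h>

/* Fill `out` with `count` pre-rolled results of a `sides`-sided die. */
void preroll(unsigned int seed, int sides, int count, int out[]) {
    srand(seed);
    for (int i = 0; i < count; i++)
        out[i] = 1 + rand() % sides;  /* results in 1..sides */
}
```

Print the array ten results per row, cross numbers off as you use them, and the sheet behaves like a jar of chits you never have to refill mid-session.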