4,725,362 events, 1,116,774 push events, 1,604,889 commit messages, 102,597,139 characters
Merge pull request #122201 (black -> pyflakes)
This switches the linting of the NixOS test driver script from Black (which is a code formatter) to PyFlakes. Contrary to Black, which only does formatting and a basic syntax check, PyFlakes performs useful checks early on, before spinning up VMs and evaluating the actual test script, and thus is genuinely useful during development rather than an annoyance.
One of the reasons why Black has been an annoyance is that it assumes the files it formats aren't inlined inside another programming language.
With NixOS VM tests however, we inline these Python scripts in the testScript attribute. With some of them using string antiquotations, things get rather ugly, because Black no longer formats static code but code generated by Nix being used as a macro language.
This becomes especially annoying when an antiquotation contains an option definition from the NixOS module system, since an unrelated change might change its length or contents (e.g. suddenly containing a double quote), for which Black will report an error.
While issue #72964 has been sitting around for a while (and probably annoyed everyone involved), nobody actually proposed an implementation until @roberth did a first pull request yesterday, which added a skipFormatter attribute that, contrary to skipLint, silently disabled Black.
This led to very legitimate opposition from @flokli:
As of now, this only adds an option that does exactly the same as the already existing one.
black does more than linting, yes. Last September it was proposed to switch from black to a more permissive (only-)linter.
I don't think adding another option (skipFormatter) that currently does exactly the same as skipLint will help out of this confusion.
IMHO, we should keep skipLint, but use a linter that doesn't format, at least not enforce the line length (due to the nix interpolation we do).
This was written while I was doing an alternative implementation and pretty much sums up the work I'm merging here: a switch to PyFlakes, which only checks for various errors in the code (e.g. undefined variables, shadowing, wrong comparisons and more) but does not do any formatting.
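To make the difference concrete, here is a generic Python snippet (not taken from any actual test script in the tree): Black happily formats it, while running pyflakes on the file reports both problems before any VM is spun up.

    import subprocess  # PyFlakes: 'subprocess' imported but unused

    def check_marker(path):
        # typo below: 'pth' was never defined -- PyFlakes: undefined name 'pth'
        return pth.exists(path)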
Since Black didn't do any of the checks performed by PyFlakes (except a basic syntax check), the existing test scripts needed to be fixed.
Thanks to @blaggacao, @Ma27 and @roberth for helping with testing and fixing those scripts.
Revert "nixos/tests/docker-tools*: remove useless formatter"
Annoyed with the interference of the python formatting of generated code (see #72964), I took matters into my own hands as maintainer of dockerTools.
Afterwards, I've created a PR, hoping to unstuck the discussion.
@aszlig took notice and, thanks to his Python ecosystem knowledge, the testing efforts of @blaggacao and @Ma27, and a sense of shared suffering and camaraderie, we were able to change the situation for the better in #122201.
Now, we have a proper linter that actually helps contributors, so it's time to turn it back on again.
I'm glad we could make it happen this quickly!
Thanks!
This reverts commit 4035049af3e45554ffc4d8b4c30fd43ae9cd328a.
fixed a fucking stupid ass bug that was caused by one stupid ass fucking line of shitty vscript /sqrl. so yeah, also fuck sv_cheats, for some reason it broke everything. anyway, you want to hear my life story?
Nanoblood additions
Normal effect at all times (5u/10% blood). First blood threshold causes oxygen damage capped to 20, FYI.
Double effect if you're below the second blood threshold (336 blood_volume) (5u/20% blood). The second blood threshold causes oxygen damage capped to about 50 and starts toxin damage, FYI.
If below the third blood threshold (224 blood_volume), sets blood level to the threshold +1, puts the target to sleep for 10 seconds, and adds 25 toxin (the chemical). There's no way in hell you were conscious with all your blood missing, so I don't care. Third blood threshold causes uncapped oxygen damage and increases toxin damage significantly, FYI.
A single 5u pill applied to someone below 224 blood_volume will end up at ~340 blood_volume, which is fully out of danger. This should significantly improve nanoblood's performance vs. worst-case idiots who had all their blood fall out, making it a competitive buy.
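For reference, a rough Python sketch of the dosing logic described above (the real implementation is DM code; the constant names and the 560 baseline blood volume are assumptions):

    BLOOD_VOLUME_NORMAL = 560  # assumed baseline blood volume
    THRESHOLD_SECOND = 336     # second blood threshold
    THRESHOLD_THIRD = 224      # third blood threshold

    def apply_nanoblood(mob, units):
        # emergency case: below the third threshold, top up to just above it,
        # knock the target out for 10 seconds and add 25u of toxin
        if mob.blood_volume < THRESHOLD_THIRD:
            mob.blood_volume = THRESHOLD_THIRD + 1
            mob.sleep_for(10)
            mob.add_reagent("toxin", 25)
        # double effect below the second threshold (5u/20%), normal otherwise (5u/10%)
        fraction = 0.20 if mob.blood_volume < THRESHOLD_SECOND else 0.10
        mob.blood_volume += BLOOD_VOLUME_NORMAL * fraction * (units / 5)

With those assumed numbers, a 5u pill on someone below 224 ends up at 225 + 0.2 * 560 ≈ 337, which matches the ~340 figure above.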
Project-Akhir_Kelompok2_ARM2
Practicing Python is Lovely to Do and for Mr. Hilmi We Love You so Much <3
sched/core: Fix ttwu() race
Paul reported rcutorture occasionally hitting a NULL deref:
sched_ttwu_pending()
  ttwu_do_wakeup()
    check_preempt_curr() := check_preempt_wakeup()
      find_matching_se()
        is_same_group()
          if (se->cfs_rq == pse->cfs_rq) <-- BOOM
Debugging showed that this only appears to happen when we take the new code-path from commit:
2ebb17717550 ("sched/core: Offload wakee task activation if it the wakee is descheduling")
and only when @cpu == smp_processor_id(). Something which should not be possible, because p->on_cpu can only be true for remote tasks. Similarly, without the new code-path from commit:
c6e7bd7afaeb ("sched/core: Optimize ttwu() spinning on p->on_cpu")
this would've unconditionally hit:
smp_cond_load_acquire(&p->on_cpu, !VAL);
and if: 'cpu == smp_processor_id() && p->on_cpu' is possible, this would result in an instant live-lock (with IRQs disabled), something that hasn't been reported.
The NULL deref can be explained however if the task_cpu(p) load at the beginning of try_to_wake_up() returns an old value, and this old value happens to be smp_processor_id(). Further assume that the p->on_cpu load accurately returns 1, it really is still running, just not here.
Then, when we enqueue the task locally, we can crash in exactly the observed manner because p->se.cfs_rq != rq->cfs_rq, because p's cfs_rq is from the wrong CPU, therefore we'll iterate into the non-existent parents and NULL deref.
The closest semi-plausible scenario I've managed to contrive is somewhat elaborate (then again, actual reproduction takes many CPU hours of rcutorture, so it can't be anything obvious):
X->cpu = 1
rq(1)->curr = X

CPU0                    CPU1                    CPU2

                        // switch away from X
                        LOCK rq(1)->lock
                        smp_mb__after_spinlock
                        dequeue_task(X)
                        X->on_rq = 0
                        switch_to(Z)
                        X->on_cpu = 0
                        UNLOCK rq(1)->lock

                                                // migrate X to cpu 0
                                                LOCK rq(1)->lock
                                                dequeue_task(X)
                                                set_task_cpu(X, 0)
                                                X->cpu = 0
                                                UNLOCK rq(1)->lock

                                                LOCK rq(0)->lock
                                                enqueue_task(X)
                                                X->on_rq = 1
                                                UNLOCK rq(0)->lock

// switch to X
LOCK rq(0)->lock
smp_mb__after_spinlock
switch_to(X)
X->on_cpu = 1
UNLOCK rq(0)->lock

// X goes sleep
X->state = TASK_UNINTERRUPTIBLE
smp_mb();               // wake X
                        ttwu()
                        LOCK X->pi_lock
                        smp_mb__after_spinlock

                        if (p->state)

                        cpu = X->cpu; // =? 1

                        smp_rmb()

// X calls schedule()
LOCK rq(0)->lock
smp_mb__after_spinlock
dequeue_task(X)
X->on_rq = 0

                        if (p->on_rq)

                        smp_rmb();

                        if (p->on_cpu && ttwu_queue_wakelist(..)) [*]

                        smp_cond_load_acquire(&p->on_cpu, !VAL)

                        cpu = select_task_rq(X, X->wake_cpu, ...)
                        if (X->cpu != cpu)
switch_to(Y)
X->on_cpu = 0
UNLOCK rq(0)->lock
However I'm having trouble convincing myself that's actually possible on x86_64 -- after all, every LOCK implies an smp_mb() there, so if ttwu observes ->state != RUNNING, it must also observe ->cpu != 1.
(Most of the previous ttwu() races were found on very large PowerPC)
Nevertheless, this fully explains the observed failure case.
Fix it by ordering the task_cpu(p) load after the p->on_cpu load, which is easy since nothing actually uses @cpu before this.
Fixes: c6e7bd7afaeb ("sched/core: Optimize ttwu() spinning on p->on_cpu")
Reported-by: Paul E. McKenney [email protected]
Tested-by: Paul E. McKenney [email protected]
Signed-off-by: Peter Zijlstra (Intel) [email protected]
Signed-off-by: Ingo Molnar [email protected]
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: baalajimaestro [email protected]
"9:30am. I go up 10m ago and am chilling. Let me do that for a bit longer and then I will start.
9:50am. Let me start.
10am.
inl policy player_state game_state pid actions =
(log_prob_from_sample (1 / f64 (length actions)), sampling.sample actions), fun r => r
How do I make a batch version of the uniform player? Also I just now realized that pids can be different between samples here!
That means, I can't use anything but self play in parallel training.
player_state game_state pid actions, these 4 args should be in an array. And the output should be an array of (log_prob_from_sample (1 / f64 (length actions)), sampling.sample actions), fun r => r. No way around this.
It is a pity I can only do self play training here, but this will have the effect of making things even simpler for me.
...Though there is an option to partition the data by the ids. That would allow me to use two separate players.
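As a sanity check of what the batched version has to return, here is a hypothetical Python sketch (the real code is Spiral; all names below are made up): the four per-trace arguments go into one array, and each output entry carries the pid it belongs to along with the log-probability of the uniform sample and the sampled action.

    import math, random

    def uniform_policy_batch(batch):
        # batch: list of (player_state, game_state, pid, actions) tuples, one per trace
        out = []
        for player_state, game_state, pid, actions in batch:
            log_prob = math.log(1.0 / len(actions))  # uniform over the legal actions
            action = random.choice(actions)
            out.append((pid, (log_prob, action)))
        return out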
Let me take a short break here.
10:25am. Let me resume.
10:30am. I am thinking of how I will deal with r2's in the neural net. Instead of indexing into it, I will just give it as it is.
For both the policy net, I think I'll just make the pid a feature. Otherwise in HU NL Holdem, I won't be able to deal with cutoffs properly.
10:35am. I said it was to contribute to my own side, but my internal story is changing again.
I am simplifying my image. In truth programming does not matter for its own sake.
I am imagining picking a real world goal, and imagining how having an AI agent guide me would help.
Imagine I was playing chess against Magnus Carlsen for a large sum of money. If it was just me, I'd be quite nervous, but if I had AlphaZero giving me tips on the side, I'd be a lot more confident.
The same would go with poker.
Rather than automating everything off the bat, I'll train the agent, and play it for a while myself with him as the advisor. The same would go if I were cultivating the arts. I should not rush through this.
Right now it is May 2021, and I am nothing more than a beginner. I bet plenty of people could have picked my own path in 2015, but not have gotten caught up in doing random courses and then an ML library and a programming language. Had they done that, they'd be in a far stronger position than I am now.
Spiral will pay for itself long term, but for now forget those accomplishments.
I am like a millionaire who has lost it all. Forget the past luxuries. It is time to build a new empire.
10:50am. Programming is the kind of activity where having high pride gives great benefits. But cultivating agents might be better served with humility.
Pride...I am such a natural channeler of it. I started off good at programming, and then I never compromised even once in the quest for omnipotence despite all the hardship.
I never actually saw a point to humility. Some people think it is a virtue, but I discarded that as a cultural thing with no basis in reality.
Still, I can see clearly that pride is the wrong thing to channel in the face of ML's winds of uncertainty.
The demands on programming brilliance are not high. I do not think that even excellence in mathematics is the determining factor here, since we are all stealing insights from everybody else.
Rather than demonically trying to squeeze out the insight from the aether, the challenge here is to give yourself to the machine.
I want to be great. I want to be king. But the game here is to be a beggar and a philanthropist. For the sake of your own side, that is.
11:10am. I need to be more flexible. Rigid trees cannot resist the wind.
I am thinking...
Obviously, the uniform player is not the one that will give me trouble.
But I am thinking about the linear player. I hadn't expected the pid thing to be an issue. I've thought about the serialization in the outputs.
But should I partition the game by pids? Could I do it with just a single player?
11:30am. No. Things are a lot more complicated than I originally envisioned.
What I envisioned was a set number of threads that terminate at different times, but now that I am thinking about it more in depth I see that the situation is not only that, but that the pids could potentially be different.
Suppose I used a single player for both the players and put in the pid as a feature - that might give me trouble when I start using RNNs. I do not want to commit myself to feedforward nets completely just yet.
11:35am. Hmmmm, no, if I did restrict myself to a single player, that would make things simpler.
11:50am. This is so complicated. Every little architecture change that I can think of, from going from feedforward to RNNs to transformers, would require a change in the training regime.
11:55am. This is rough on me. This is the same thing as me not being able to do training for tabular players.
Well, for tabular players, it is not like it specifically can't be done.
12pm. Input filtering is rough.
It would be more efficient to add some TD learning to the mix.
12:20pm. Agh, let me take a break here. It is time for that anyway.
The problem isn't really implementing the specific thing I want here, right now.
The problem is that for every different learning algorithm and NN family, I would need to implement a different training regime.
Just like with going from single trace to parallel right now, things feel off for me. I know how to do it, but I am having trouble finding abstraction opportunities. The patterns are too complex for my functional magic to work. It seems I'd be better off doing a unique implementation for everything.
12:25pm. I have only 30 chapters left and then I will finally be free from Reverend Insanity. I thought it had 2400 something, but I am only 30 chapters away from finishing it.
https://www.youtube.com/watch?v=CY_LEa9xQtg Risto Miikkulainen: Neuroevolution and Evolutionary Computation | Lex Fridman Podcast
I want to watch this interview since it is on evo computation.
...
I think it is fine if I spend my time thinking about the programming problem in front of me before committing.
If I have to implement a different training function for every architecture and learning algo, so be it. But I should not jump into it before I feel that this is the case.
There is a lot of work here, but there isn't a huge unbounded amount of it like when I worked on an ML library and the Spiral language. All this will go quickly once I catch my bearings."
happy with all the functionality and the speed
- Several different ways to view all the git commits and of course fugitive to run all git commands in the browser
- a nice selection of themes that I can cycle through when I get bored of the same old (but where to keep the fonts???)
- my first vim scripts, and the lightswitch for daytime to nighttime coding is quite useful for me. (I love that I don't get blasted by a white LCD in the middle of the night by opening a terminal)
- I'm onto my second round of plugin selections for tag analysis, and I think Vista.vim will be fun to use because it supplements the tags with function listings from the language servers. So that will make learning about tags a little easier.
- LSP server and client, I was running CoC, but found my understanding of how to manage its keybindings, operations, and use to be a bit limited. Now I've switched over to the internal neovim LS client, and I think I'm probably going to enjoy working on complex configs in lua rather than vim script. And, that the neovim LS client is a little behind in features will help me really learn what the language server is doing rather than what the ui is doing
- syntax highlighting for days, and most of it's turned off. Polyglot has handled everything I've thrown at it (and I'm not very far out there... so of course it would). I'll turn on more stuff when I need it.
- another first for me is undo tree. It looks like it's gonna save my ass a few times. And I like how detailed the configuration is, I need to read through how those feature flags are managed
- a whole lot of text editing tools, spelling, style linting, quick persistent scratchpad..... And I feel way more organized and in control of what my text editor is doing than I have in a long time.
There are a few things I haven't tested yet ;{
- vim-thematic - I think it will come in handy as my theme collection grows and I become more picky
- Ctags - I think I have a faint understanding of what they are and how they are useful. But I didn't know nearly enough to properly operate the code scanners, and I uninstalled that stuff.
fuck you fuck php i dont wanna use this shit ever again
holy fucking shit
- removed spotify integration because gay
- added PVP INFO MODULE WOOOOO
- reformat
- removed unused classes
The Great Refactoring
Original design was to have a single resource type, use a vtable variable with mostly noops except for initializer, and the initializer would replace the vtable with implementations on success.
Lots of downsides:
- Lots of boilerplate code for the same thing, over and over
- Changing values around for the vtables in three places - initial constructor, initializer and deinitializer
- a validity function that is mostly unused, since we know uninitialized values are invalid and initialized values are valid
Basically no upsides I can think of.
New design: csalt_resources have one function - csalt_resource_init - and it returns a new type, csalt_resource_initialized*. It returns a valid pointer on success, or NULL on failure. Initialized types are csalt_stores with one more function - deinitialize (see the sketch after the list below).
- Much less boilerplate and much simpler code
- Since we know we can't read/write/etc. an uninitialized resource, we just remove the possibility
- Since we can't initialize an initialized resource, we just remove the possibility
- I seriously can't emphasise enough how much simpler the code is and I was getting sick of repeating the same code over and over and
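A rough Python analogy of that shape (the real API is C, and the class and method names below are only illustrative): initialization is the only operation on a resource, it either yields an initialized store or nothing, and deinitialization exists only on the initialized type.

    class Resource:
        def init(self):
            """Return an InitializedResource on success, or None on failure."""
            raise NotImplementedError

    class InitializedResource:
        def read(self, size): ...   # it is a store, so the usual operations...
        def write(self, data): ...
        def deinit(self): ...       # ...plus exactly one extra: deinitialize

    def use(resource):
        store = resource.init()
        if store is None:
            return                  # failure: nothing to deinitialize, nothing to validate
        try:
            store.write(b"hello")
        finally:
            store.deinit()

Reads and writes are only reachable through a successful init, so the "is this valid?" checks disappear from the call sites.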
Listen, you can whine about technical debt or you can grow a spine and fucking fix it because YOU'RE THE ONE IN CHARGE HERE
YOU'RE THE MASTER NOW
Sorry, taking out my frustration about previous employers.
small change in rest api, because time on server is diff. So time on server is running UTC for some weird reason. Fuck you amazon ec2. So what I do is basically just take the incoming starting date and schedule it 1 h from current date time.
Your branch is up to date with 'origin/master'.
Changes to be committed:
modified: rest-api/routes/scheduleItem.ts
modified: scheduler/index.ts
modified: scheduler/scheduler.ts
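A tiny Python sketch of the workaround described above (the actual change lives in the TypeScript files listed; the helper name is made up): ignore the wall-clock interpretation of the incoming start date and just schedule the item one hour from the server's current UTC time.

    from datetime import datetime, timedelta, timezone

    def workaround_run_time(incoming_start):
        # the EC2 box runs on UTC, so don't trust the incoming local start date;
        # schedule the item one hour from the server's current (UTC) time instead
        return datetime.now(timezone.utc) + timedelta(hours=1)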
Create LICENSE
change to GNU GPL so further forks must be Open Source
However, we made a special case in case Kade and/or Ninja would like to pull the mods back into theirs again. Yes, you guys and all of you gamers are allowed to pull back Perkedel Mods and Custom weeks, even make them canon (which is much better, yay), with these terms:
- Expressly state that the Perkedel weeks are CC4.0-BY-SA and belong to JOELwindows7 and involved contributors if available. Take a look at some example Open Source notices in open source software; say, in your Android phone, there is an "About, Law info, View License of Sources" screen. Yeah, that acknowledgement thingy. Simply put, it's a credit, and these weeks are licensed differently, which for mine is CC4.0-BY-SA.
- Expressly state that the Perkedel mods are GNU GPL v3. Okay, that might be confusing, but the idea is: JOELwindows7 may make enhancements, and those enhancements that belong to him are GNU GPL v3. Well, you are still allowed to pull those back, and part of these are... UGH, damn, let me think about this one. Maybe that... okay, shall I allow them to pull back as Apache?.. uhhhh idk. I need to think.
"2pm. Let me do the chores here.
2:10pm. I did not have to do much today. Okay, let me stop RI for the time being.
I've come to a decision, I am going to give up on input filtering. It seems like a nice optimization at first glance, but it has too many disadvantages when considered across various different contexts. For the GPU, rather than lowering its consumption when there is nothing to do, I really should be focusing on giving it more work instead.
The issues with contracting the input space as the processing goes along is not something I want to deal with.
2:30pm. Ok, enough. Stop taking breaks. Focus me. I can put a few hours into this.
2:50pm. My focus is high, but let me step away from the screen for a bit. I am converging to some conclusions, but need some time to elaborate them.
3:10pm. I took a short nap to think about it.
I've come to a decision - RNNs are out. There is no way to implement RNN agents efficiently on the GPU. It would be different if I had a specialized chip with its own dedicated threads for every outer dimension, but with the GPU, even for 2p games I'd have to waste 50% of the computation on every calculation.
This is because while p1 is acting, p2 will have to wait. With a feedforward net this would not be a problem since it has no state that needs to be carried over, but recurrent nets do. With 6p games I'd need to waste 5/6ths of the computational capacity!
The solution to make this efficient would be to use a data structure to store intermediate states for each player. But a regular matrix cannot be used for this, the data structure would have to be immutable. Or something like an array of arrays...
But either way, fetching these states from such a data structure in order to assemble them into a matrix would be expensive and complicated. And RNNs in particular are harder to train than feedforward nets.
Since transformers are feedforward nets, I should use them instead.
Also, when trained using backprop, RNNs are effectively feedforward anyway.
3:20pm. Forget it, I can safely leave RNNs out of consideration. Who knows if they will become dominant even on future hardware.
It is feedforward nets all the way for me from here. I should consider transformers for full NL holdem.
Hmmm, let me read that MLPmixer paper.
...Agh, let me reset the router.
https://arxiv.org/abs/2105.01601 MLP-Mixer - An all-MLP Architecture for Vision
I want to see what conclusions I can draw from this.
https://arxiv.org/abs/2007.07368 Explicit Regularisation in Gaussian Noise Injections
I might as well read this. I had the idea of injecting noise, and this should improve my understanding of what I was suggesting.
3:50pm. Ok, I think I get what MLP mixer is doing. It is interesting that it works so well.
Let me read the noise paper.
3:55pm. Ok, done. Some math stuff, not a big deal. Let me take a look at the MLP mixer again. What regularization is it using? It was not in the pseudocode.
In particular, we use RandAugment [12], mixup [56], dropout [42], and stochastic depth [19].
Hadn't heard about the first two.
4:15pm. Done with lunch. It was a lot earlier today than I expected.
Mixup trains a net on convex combination of pairs of examples and their labels. Interesting.
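A minimal numpy sketch of that idea, just to make "convex combination of pairs of examples and their labels" concrete (the alpha value and pairing-by-permutation are the usual choices, not something from this log):

    import numpy as np

    def mixup(x, y, alpha=0.2):
        # x: batch of inputs, y: one-hot labels; mix randomly chosen pairs
        lam = np.random.beta(alpha, alpha)      # mixing coefficient in [0, 1]
        perm = np.random.permutation(len(x))    # random partner for each example
        return lam * x + (1 - lam) * x[perm], lam * y + (1 - lam) * y[perm]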
4:20pm. Let me still my thoughts, they are too scattered around.
RNNs are out. Transformers, that is, feedforward nets that can take in arbitrary length sequences, will require some extra consideration for efficiency, but it is not a big deal.
4:30pm. The tabular methods, I'd also put into the feedforward category.
...Input filtering is out, just as I'd said earlier. Let me think through that again.
It is just so hard to start for me. Overcoming my internal inertia is a huge hurdle.
4:35pm. Let me step away from the screen.
6:50pm. Oh, it seems I stepped away for a while.
I was thinking about long term credit assignment and long term memory. Value functions and TD learning are a clue.
Now that I have my new training method, I can see gradient coming into the inputs as viable targets.
When composing networks which have different timescales, backprop would not work as by the time the slow net gets the targets, the fast net's trace would have already decayed.
The answer is to do what TD does - predict the targets. If it were before, that would have been absurd since the gradients could be badly unhinged, but once I rescale them, I'd get what I need.
Once the issue of passing rewards at different timescales becomes an issue, the only choice is to start predicting them. This is the answer, it cannot be anything else.
7pm. There was a period (probably in 2019) where I looked around for ways of making recurrence propagate info beyond the backprop horizon. There were some papers on that, but they were just hacks in the end. Fancy math, some benchmarks, but nothing that clicked.
I think this situation here is the same as with unsupervised learning. First there were autoencoders and they worked on just toys. Then there were GANs which worked on bigger toys, but couldn't possibly be the answer due to instability. Then there was the training scheme in the GMN paper that could make them stable.
Forget about unrolling RNNs for arbitrary number of steps. The only consideration RNNs should have is whether I want weight sharing across timesteps. I already know that LSTMs and regular RNNs cannot propagate info for very long distances regardless.
What can do it even with backprop are tree-like architectures in which the higher levels take info at increasingly longer timescales. Even if some trick exists that could make info propagate for a lot longer in RNNs than they do now, do I really think it will ever be the case that it will optimize for something like 2 ** 32 timesteps?
A tree NN could do that with 32 layers without trouble.
But even if the training could work in theory, the computer cannot possibly hold that many inputs in memory.
So the answer is the same as in TD learning - predict. When the memory is not big enough, use your own internal state as reference.
7:15pm. I do not know what the right scheme for doing that is, but it does not matter.
What matters is that I believe in it.
I've been agonizing about RNNs and architectures in the past, but that MLP Mixer paper proves that architectures are ultimately a technicality.
7:20pm. I should believe that everything I need is here. It will take a few more innovations to get where I want, but those will just be new spins on an old thing.
When truly long-term memory becomes necessary, the method to make synthetic gradients work will appear. For all I know maybe what exists today works well, though I doubt it.
7:30pm. Let me close here. My progress is snail-like, but tomorrow I will try to get the training procedure and the uniform agent to work as a sketch.
I am going to get through my inhibitions even if it kills me. Today I might not have done much programming, but I did not spend this time playing games or reading novels. I spent it trying to iron out my beliefs. Eventually the effort today will turn into motivation to do programming tomorrow. I will build the strength to travel the path.
I just had to think about the issue of the sequence length. It is inevitable that trying to decide between feedforward nets and recurrent ones would trigger this debate in me.
Not having a good reply for long term credit assignment is one of my weaknesses, but it is not that there aren't any hints in the present on what the truth of tomorrow will be."
Hack in nvh264enc in place of x264enc
Turns out that the H.264 encoding hardware in my GPU is way more efficient and performant than encoding on my 9-year-old CPU. Isn't that wild?
Seriously, I'd kind of written off the NVIDIA encoding stuff since it's hard to use from within a container, but seeing the shift from pegging an entire CPU core to the entire Hypcast server using about 30% CPU encoding one of the highest-quality stations I can receive at a higher bitrate than before is pretty convincing.
The encoding parameters are a bit arbitrary, in general the goal is "low (enough) latency but high quality." Doing a 15-frame rate control lookahead at the same time as all the other low-latency settings might be a weird choice, but at least for now I'm satisfied with the pipeline startup time (which is all I think this would affect).
The biggest problem is that NVENC doesn't support the Constrained Baseline profile, which is what WebRTC requires. I can successfully negotiate the Baseline profile with Chromium, but it looks like I can no longer play the video stream in Firefox. To be honest I've had big issues with the video skipping in Firefox even with the x264 encoder, so I tend to use Chromium / Chrome with Hypcast anyways, but I certainly don't love giving up cross-browser support.
Add files via upload
This is Dr. Ignaz Semmelweis, a Hungarian physician born in 1818 and active at the Vienna General Hospital. If Dr. Semmelweis looks troubled it's probably because he's thinking about childbed fever: a deadly disease affecting women that have just given birth. He is thinking about it because in the early 1840s at the Vienna General Hospital as many as 10% of the women giving birth die from it. He is thinking about it because he knows the cause of childbed fever: it's the contaminated hands of the doctors delivering the babies. And they won't listen to him and wash their hands!
In this notebook, we're going to reanalyze the data that made Semmelweis discover the importance of handwashing. Let's start by looking at the data that made Semmelweis realize that something was wrong with the procedures at Vienna General Hospital.
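A minimal pandas sketch of that first step (the file name and column names here are assumptions about how the yearly clinic data might be stored, not the notebook's actual code):

    import pandas as pd

    # hypothetical path and columns: year, births, deaths, clinic
    yearly = pd.read_csv("datasets/yearly_deaths_by_clinic.csv")
    yearly["proportion_deaths"] = yearly["deaths"] / yearly["births"]
    print(yearly.groupby("clinic")["proportion_deaths"].mean())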
C (#1)
- Started fresh
Basically gonna do all of this with c
That reminds me I should change those names from cpp to c
- Renamed files
- Wrote my thoughts out before I blackout
TODO: Everything
And edit the makefile so it can compile both c and cpp, and without me thinking about it
Also research if c object files (Why would they call them that if there are no objects) can be compiled with c files. That way I might just be able to have a main.cpp or something. Damn I'm tired
- Did a cat program in c.
Not the most impressive thing, but it's still ok.
I like not using objects, but not using namespaces is where I draw the line. I kinda make my own namespaces with the underscore.
Also can't overload functions which sucks
- Just testing out my buffer. It works, kinda
It only works if you pipe in the file, because it needs that EOF to get out of the first character loop.
In cpp this wouldn't be a problem because they got things like getline.
I could still try that character by character stack thing, which is really why I'm pushing this commit.
- Update Language goal.md
Thinking about writing notations in relation to what I'm doing. Probably gonna put this on hold for a week while I hack something up in js to do this.
Co-authored-by: Leif Messinger [email protected]