< 2021-11-09 >

3,466,161 events, 1,861,733 push events, 2,823,274 commit messages, 220,950,874 characters

Tuesday 2021-11-09 03:00:31 by Annika

Implement some more stuff

This is really hacky and I don't like it.

I ought to either rewrite m68kdecode to work with my data structures, or use m68kdecode's data structures myself. However, I think I'd make better decisions with that once I have some experience, so, for now, ugly code it is.


Tuesday 2021-11-09 06:43:44 by Ryan Duffy

Fix end-to-end tests post Next.js migration

  • In order to run either the mock tests or the end-to-end tests, we need a dev server running in CI. Previously, we would spawn that manually from inside of a script. Now, we start that from inside of the action that does the testing. This means that if you want to run the tests locally, you'll also need your local dev server running (which was already true).
  • Add necessary parts of the test folder to public. This means that we will be able to run against preview branches as well.
  • Add a GitHub Action for setting up Next.js with a cache, so we only have to copy one line everywhere, not an entire step. It would be nice if we could further refactor out the test stripes, because a lot of things are repeated 9 times.
  • Remove the dev-server.js file. If we're going to use Next.js, then we should try to keep our config as out-of-the-box as possible, in order to allow us to stay up-to-date. The fact that the test folder has to be available on the development server for the tests to run at all seems like the real problem here. A good follow-on task would be to sort out the best way to handle those files in the test/dev environment.
  • Go back to node 14 for lock file. I didn't realize when I changed this that we were stuck on such an outdated node version. We should fix this ASAP, but until then, let's stick with what was working, just to minimize the factors that have gone into resolving all of the failing tests.
  • Add a waitingFor field to the waitUntil method used in tests (see the sketch after this list). It's really frustrating when a test fails and all it says is "timed out". Like... timed out while waiting for what? Test failures should guide you to the place of failure as quickly as possible.
  • By default, wait for 10 seconds. If something doesn't appear in the DOM in 10 seconds, that should be considered a failure.
  • Log more from playwright tests. This could be annoying, maybe we should tune it, but it's nice to know what's going on in these tests, and we're only going to look at the output when something is failing anyways, so it may as well be verbose.
  • Log the time that tests take. We started with a lot of timeouts. I fixed them, but sometimes it was hard to see what tests were timing out. Now, when a test completes, we emit some test stats. Eventually, it might be nice to just like... use a real test runner... but this is fine for now.
  • Mock test URLs like http://localhost:8080/recording/24c0bc00-fdad-4e5e-b2ac-23193ba3ac6b?mock=1 show a gray screen after a few seconds. If you remove the mock query param then the screen shows up totally normally. I'm not sure of the exact interactions here. Let's figure this out in a follow-on task.
  • Add test/run and test/mock/run scripts for running the test runners locally; they take care of making sure that the files get shuffled around correctly before and after, that the dev server is running, etc.
  • Add README instructions about .env files and new test run scripts.
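To make the waitUntil change concrete, here is a minimal TypeScript sketch of such a helper; only the waitingFor field and the 10-second default come from the bullets above, and the rest (polling interval, signature) is assumed:

async function waitUntil(
  predicate: () => boolean | Promise<boolean>,
  options: { waitingFor: string; timeout?: number }
): Promise<void> {
  const timeout = options.timeout ?? 10_000; // default: fail after 10 seconds
  const start = Date.now();
  while (Date.now() - start < timeout) {
    if (await predicate()) return; // condition met, stop waiting
    await new Promise((resolve) => setTimeout(resolve, 100)); // poll every 100ms
  }
  // The point of waitingFor: the failure message says what we were waiting on.
  throw new Error(`Timed out after ${timeout}ms waiting for ${options.waitingFor}`);
}

A call might look like: await waitUntil(() => !!document.querySelector('.comments'), { waitingFor: 'the comments list to render' }).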

That's true! We still have flaky, intermittent tests. The goal of this PR was to get things to run; getting them to pass is a future-us problem!

I'm going to put these into a notion doc as well, but documenting here couldn't hurt. Three things that tripped me up on this PR:

  • Any merge conflicts mean actions will not run at all. If you have a branch and it's unclear why actions aren't kicking off, make sure you check for and resolve merge conflicts.
  • When using actions whose type is javascript, your inputs get automatically turned into environment variables, which is convenient. However, when using composite actions, this won't happen: you have to manually pass the environment variables in with an expression.
  • The checkout action that we use all the time performs a strict clean of the directory that it's about to clone into! So, if you are trying to do some sort of caching or otherwise unloading assets into a dir that you also need to do git checkout XXXX in (with the checkout action), you either need to checkout first, or you can use [clean: false](https://github.com/actions/checkout#usage) I think, though I haven't actually tried this.
  • Actions are somewhat limited. More details here: actions/runner#646

Fixes #4313


Tuesday 2021-11-09 06:56:35 by alk3pInjection

drm: Handle dim for udfps

  • Apparently, the LineageOS FOD implementation is better than udfps because it has onShow/HideFodView hooks, which allow us to toggle the dim layer seamlessly.

    Since udfps only partially supports the former, we'd better kill dim in the kernel. This is kind of a hack, but it works well, bringing the perfect FOD experience back to us.

Co-authored-by: Art_Chen [email protected]
Signed-off-by: alk3pInjection [email protected]
Change-Id: I80bfd508dacac5db89f4fff0283529c256fb30ce


Tuesday 2021-11-09 08:25:38 by Otherwa

Update index.md

committed change because fuck you, that's why


Tuesday 2021-11-09 08:33:48 by Da_Racci

[SKIP] god im so sick of this shit just please fix


Tuesday 2021-11-09 08:52:54 by Jason Ekstrand

drm/i915: Use a table for i915_init/exit (v2)

If the driver was not fully loaded, we may still have globals lying around. If we don't tear those down in i915_exit(), we'll leak a bunch of memory slabs. This can happen in two ways: if use_kms = false, or if we've run the mock selftests. In either case, we have an early exit from i915_init() which happens after i915_globals_init(), and we need to clean up those globals.

The mock selftests case is especially sticky. The load isn't entirely a no-op. We actually do quite a bit inside those selftests, including allocating a bunch of mock objects and running tests on them. Once all those tests are complete, we exit early from i915_init(). Previously, i915_init() would return a non-zero error code on failure and a zero error code on success. In the success case, we would get to i915_exit(), check i915_pci_driver.driver.owner to detect if i915_init() exited early, and do nothing. In the failure case, we would fail i915_init(), but there would be no opportunity to clean up globals.

The most annoying part is that you don't actually notice the failure as part of the self-tests since leaking a bit of memory, while bad, doesn't result in anything observable from userspace. Instead, the next time we load the driver (usually for next IGT test), i915_globals_init() gets invoked again, we go to allocate a bunch of new memory slabs, those implicitly create debugfs entries, and debugfs warns that we're trying to create directories and files that already exist. Since this all happens as part of the next driver load, it shows up in the dmesg-warn of whatever IGT test ran after the mock selftests.

While the obvious thing to do here might be to call i915_globals_exit() after selftests, that's not actually safe. The dma-buf selftests call i915_gem_prime_export which creates a file. We call dma_buf_put() on the resulting dmabuf which calls fput() on the file. However, fput() isn't immediate and gets flushed right before the syscall returns. This means that all the fput()s from the selftests don't happen until right before the module load syscall used to fire off the selftests returns, which is after i915_init(). If we call i915_globals_exit() in i915_init() after selftests, we end up freeing slabs out from under objects which won't get released until fput() is flushed at the end of the module load syscall.

The solution here is to let i915_init() return success early and detect the early success in i915_exit() and only tear down globals and nothing else. This way the module loads successfully, regardless of the success or failure of the tests. Because we've not enumerated any PCI devices, no device nodes are created and it's entirely useless from userspace. The only thing the module does at that point is hold on to a bit of memory until we unload it and i915_exit() is called. Importantly, this means that everything from our selftests has the ability to properly flush out between i915_init() and i915_exit() because there is at least one syscall boundary in between.

In order to handle all the delicate init/exit cases, we convert the whole thing to a table of init/exit pairs and track the init status in the new init_progress global. This allows us to ensure that i915_exit() always tears down exactly the things that i915_init() successfully initialized. We also allow early exit of i915_init() without failure by an init function returning > 0. This is useful for nomodeset and selftests. For the mock selftests, we convert them to always return 1, so we get the desired behavior: the module always loads successfully, and we then properly tear down the partially loaded driver.
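The driver itself is C, but as a language-neutral sketch of the init/exit-table pattern just described (all names below are illustrative, not the actual i915 symbols):

type InitFunc = () => number; // < 0: failure, 0: continue, > 0: early success
type ExitFunc = () => void;

// Illustrative stand-ins for the real init/exit steps.
const initFuncs: { init: InitFunc; exit?: ExitFunc }[] = [
  { init: () => 0, exit: () => { /* free globals */ } },
  { init: () => 1 }, // e.g. mock selftests: report success, but stop here
  { init: () => 0, exit: () => { /* unregister PCI driver */ } },
];

let initProgress = 0; // how many entries initialized successfully

function moduleInit(): number {
  for (let i = 0; i < initFuncs.length; i++) {
    const err = initFuncs[i].init();
    if (err < 0) {
      moduleExit(); // unwind exactly the steps that completed
      return err;
    }
    initProgress = i + 1;
    if (err > 0) break; // early success: loaded, but nothing else to set up
  }
  return 0;
}

function moduleExit(): void {
  // Tear down in reverse order, only what moduleInit() got through.
  for (let i = initProgress - 1; i >= 0; i--) {
    initFuncs[i].exit?.();
  }
  initProgress = 0;
}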

v2 (Tvrtko Ursulin):

  • Guard init_funcs[i].exit with GEM_BUG_ON(i >= ARRAY_SIZE(init_funcs))

v2 (Daniel Vetter):

  • Update the docstring for i915.mock_selftests

Signed-off-by: Jason Ekstrand [email protected]
Reviewed-by: Daniel Vetter [email protected]
Cc: Tvrtko Ursulin [email protected]
Signed-off-by: Daniel Vetter [email protected]
Link: https://patchwork.freedesktop.org/patch/msgid/[email protected]


Tuesday 2021-11-09 09:33:31 by Garra_Blanca_03

Update EggmanApplication.java

I've come to make an announcement; Shadow The Hedgehog's a bitch ass motherfucker, he pissed on my fucking wife. Thats right, he took his hedgehog quilly dick out and he pissed on my fucking wife, and he said his dick was "This big" and I said that's disgusting, so I'm making a callout post on my twitter dot com, Shadow the Hedgehog, you've got a small dick, it's the size of this walnut except WAY smaller, and guess what? Here's what my dong looks like: PFFFT, THAT'S RIGHT, BABY. ALL POINTS, NO QUILLS, NO PILLOWS. Look at that, it looks like two balls and a bong. He fucked my wife so guess what? I'm gonna fuck the Earth. THAT'S RIGHT THIS IS WHAT YOU GET, MY SUPER LASER PISS! Except I'm not gonna piss on the earth. I'm gonna go higher. I'M PISSING ON THE MOON! HOW DO YOU LIKE THAT, OBAMA? I PISSED ON THE MOON YOU IDIOT! YOU HAVE 23 HOURS BEFORE THE PISS DROPLETS HIT THE FUCKING EARTH NOW GET OUT OF MY SIGHT BEFORE I PISS ON YOU TOO.


Tuesday 2021-11-09 12:21:42 by Marko Grdinić

"11:05am. Let me start watching tutorials.

https://youtu.be/hTh9NpGPeYw HOW TO SHADE YOUR DRAWINGS - Tutorial

https://www.youtube.com/watch?v=z0zrrbzmUEE HOW TO PICK COLORS FOR YOUR ART (easy)

https://youtu.be/9QEGEBK6nIY How to Learn MORE Digital Painting (Intermediate)

https://www.youtube.com/watch?v=6JrfSb4mBaE How to Paint - Digital Painting Tutorial for Beginners

https://youtu.be/DrIsAiUpEXk Portrait Painting with FULL Commentary! DTIYS!

Let me go through the above. Then I'll start going through the two courses.

https://www.youtube.com/playlist?list=PLgGbWId6zgaVDPo5U8kGa1wM3CpRrjRXy Fundamentals of Digital Painting (Full Course)

https://www.youtube.com/playlist?list=PLL7eX0No1f9nxLq_YIRD7QgugK95fn_wl The Digital Painting MEGA Course - Beginner to Advanced

Focus me, stop surfing the subs. Let me watch this stuff.

https://youtu.be/hTh9NpGPeYw?t=214

This is the way Blender does it in solid mode. It is nice and simple.

11:30am. I am not sure what to think about putting the shading on top of the lineart. The only thing that looked good to me was when he removed the lines completely.

https://youtu.be/OyWpAU_SVD0 HOW TO COLOR YOUR DRAWINGS LIKE A BOSS - Tutorial

Now this should be exactly what I am looking for. Let me watch it.

11:45am. I wasn't impressed when shading was added on top of the line art, but it works quite nicely with color. He did a whole bunch of stuff here, but ultimately it is just layering things on top of each other. I'll have to get familiar with all the knobs to make that work, but otherwise I should be able to figure it out.

https://youtu.be/Wvng0oPkNy4 HOW TO COLOR YOUR DRAWINGS (in any software)

Let me just watch this and then I'll move on to painting. Library of Ruina is all done in that style and it is interesting.

11:55am. This video here is a good starting point.

12:10pm. Ok, I have a sense for it. This is different from what I expected though. Right now, in that Yumeko tracing video, just how were the colors and the shading done?

https://youtu.be/mvPQO4fRMjA?t=1530

Here is the line art. But after that the shadows are added as slabs. There are plenty of different ways of doing it.

https://youtu.be/mvPQO4fRMjA?t=1643

I am a fan of how this vector art looks. The mainstream stuff you see on the Adobe site is crappy, but this looks good without being overly involved like Marc's stuff. I do not want to spend too much time on the shading + coloring parts, but I want to improve over bare flats.

12:15pm. I could try to adopt Marc's style, but then I'd be stuck on doing the shading and the lighting on everything.

12:40pm. Doing some /ic/ posting. What is next?

https://youtu.be/9QEGEBK6nIY How to Learn MORE Digital Painting (Intermediate)

https://www.youtube.com/watch?v=6JrfSb4mBaE How to Paint - Digital Painting Tutorial for Beginners

https://youtu.be/DrIsAiUpEXk Portrait Painting with FULL Commentary! DTIYS!

Let me leave these for after the breakfast."


Tuesday 2021-11-09 16:00:06 by Jakob Kirsch

rng: my cpu decided to be funny and clear the lowest bit of the tsc when you call rdtsc so it's always even -> shift it to the right by 1, fuck you intel


Tuesday 2021-11-09 16:28:18 by Peter Willis

New feature! Don't remove temporary TF_DATA_DIR (#9)

I've found an interesting case where Terraform does something.... annoying. That never happens........

If you're applying a Lambda function and using:

data "archive_file" "function_zip" {
  type = "zip"
    output_path = "${path.module}/manager.zip"
}

Then the 'terraform plan' will make the ZIP file, but when you run 'terraform apply' later, the module path has been removed and replaced by the dynamic TF_DATA_DIR function of terraformsh. So you end up not being able to apply your Lambda because the zip file is gone.

This adds the option '-n' (NO_CLEANUP_TMP=1) which will prevent removing the dynamic TF_DATA_DIR folder, so the .zip file created during 'terraform plan' will remain for a 'terraform apply' later.

Also added some logic so that the _cleanup_tmp function will not remove the temporary directory if the last command failed, so you can troubleshoot the failure. (I'm sure this is going to bite me in the ass later...)


Tuesday 2021-11-09 16:44:51 by Ryan Duffy

Fix end-to-end tests post Next.js migration

  • In order to run either the mock tests or the end-to-end tests, we need a dev server running in CI. Previously, we would spawn that manually from inside of a script. Now, we start that from inside of the action that does the testing. This means that if you want to run the tests locally, you'll also need your local dev server running (which was already true).
  • Add necessary parts of the test folder to public. This means that we will be able to run against preview branches as well.
  • Add a GitHub Action for setting up Next.js with a cache, so we only have to copy one line everywhere, not an entire step. It would be nice if we could further refactor out the test stripes, because a lot of things are repeated 9 times.
  • Remove the dev-server.js file. If we're going to use Next.js, then we should try to keep our config as out-of-the-box as possible, in order to allow us to stay up-to-date. The fact that the test folder has to be available on the development server for the tests to run at all seems like the real problem here. A good follow-on task would be to sort out the best way to handle those files in the test/dev environment.
  • Go back to node 14 for lock file. I didn't realize when I changed this that we were stuck on such an outdated node version. We should fix this ASAP, but until then, let's stick with what was working, just to minimize the factors that have gone into resolving all of the failing tests.
  • Add a waitingFor field to the waitUntil method used in tests. It's really frustrating when a test fails and all it says is timed out. Like... timed out while waiting for what? Test failures should guide you to the place of failure as quickly as possible.
  • By default, wait for 10 seconds. If something doesn't appear in the DOM in 10 seconds, that should be considered a failure.
  • Log more from playwright tests. This could be annoying, maybe we should tune it, but it's nice to know what's going on in these tests, and we're only going to look at the output when something is failing anyways, so it may as well be verbose.
  • Log the time that tests take. We started with a lot of timeouts. I fixed them, but sometimes it was hard to see what tests were timing out. Now, when a test completes, we emit some test stats. Eventually, it might be nice to just like... use a real test runner... but this is fine for now.
  • Mock test URLs like http://localhost:8080/recording/24c0bc00-fdad-4e5e-b2ac-23193ba3ac6b?mock=1 show a gray screen after a few seconds. If you remove the mock query param then the screen shows up totally normally. I'm not sure of the exact interactions here. Let's figure this out in a follow-on task.
  • Add test/run and test/mock/run scripts for running the test runners locally; they take care of making sure that the files get shuffled around correctly before and after, that the dev server is running, etc.
  • Add README instructions about .env files and new test run scripts.

That's true! We still have flaky, intermittent tests. The goal of this PR was to get things to run; getting them to pass is a future-us problem! I'm disabling the tests that seem to always fail for now.

I'm going to put these into a notion doc as well, but documenting here couldn't hurt. Three things that tripped me up on this PR:

  • Any merge conflicts mean actions will not run at all. If you have a branch and it's unclear why actions aren't kicking off, make sure you check for and resolve merge conflicts.
  • When using actions whose type is javascript, your inputs get automatically turned into environment variables, which is convenient. However, when using composite actions, this won't happen: you have to manually pass the environment variables in with an expression.
  • The checkout action that we use all the time performs a strict clean of the directory that it's about to clone into! So, if you are trying to do some sort of caching or otherwise unloading assets into a dir that you also need to do git checkout XXXX in (with the checkout action), you either need to checkout first, or you can use [clean: false](https://github.com/actions/checkout#usage) I think, though I haven't actually tried this.
  • Actions are somewhat limited. More details here: actions/runner#646

Fixes #4313


Tuesday 2021-11-09 16:51:36 by Steven Lamp

Let's just use high-res images because fuck your browser


Tuesday 2021-11-09 17:04:26 by quizcanners

Resharper

Separate Examples

fixed

lc

Shader dependencies

Organizing

Shadow Data

Moved Noise Texture MGMT

moved some shaders

Directory Name fixes

Update Painter_Data_Demo_MoveToResources.asset

cfg2

picker

scene

fixes

shader fix

things

fixes

  • god mode

Blurred Screen

SDF from Alpha

cleanup

pix art

small change

asdsad

void Inspect

ывыв

Namespace Fixes

God Mode

Refucktoring

asd

Editor

light caster

IsEntered

Bumped

Shaders

Bevel Shader stuff

more stuff

things

Inspector rework No change MGMT

isFoldout

Update TutorialScene.unity

Perf tex fix

PP_

Old Light Casters removed

Using Refactory

Inspector changed

  • OVerride

Lerp

PEGI_Override

Versioning

Moved to painter

Good stuff

Grand Renaming

saving loading

even more stuffs

moved some folders

data

Buildable

compile_local

Some materials

  • meta

stuff

Moved Out Procedural UI

stuff

K words

asd

ex

asd

asd


Tuesday 2021-11-09 17:09:19 by parolonn

full integration of schedules

assignment indexes are now 3d. fuck you


Tuesday 2021-11-09 17:24:39 by petrero

50.1. Mercure Hub's JWT Authorization

Can anyone publish a message to any topic on a Mercure hub? Definitely not. So how does the Mercure Hub know that we are allowed to publish this message? It's entirely thanks to this long string that we're passing to the Authorization header.

Where does this come from? It turns out, it's a JSON web token. Copy that huge string... then head over to jwt.io: a lovely site for working with JSON web tokens - or JWTs. If you're familiar with how JWTs work, awesome. If not, here's a little primer.

A JWT + Mercure Primer

  • Scroll down a bit to find a JWT editor. Paste in the encoded token. So this weird string here can actually be decoded to this JSON. And... you don't need a secret key to do it: the long string is basically just a base64 encoded version of this JSON. Anyone can turn this string into this JSON.

So when we send this long string to the server, what we're really sending is this JSON data. For us, the subscribe part isn't important... and neither is the payload. But the publish part is important. This basically says:

Hi Mercure Hub! Guess what! I have permission to publish to any topic. Cool, huh?

Ok... but why does the Mercure server trust this? Can't anyone create a JSON web token that claims that they can publish to all topics? Yea! But those wouldn't be signed correctly unless they have the "secret".

When you run a Mercure Hub, you give it a "secret" value... which, by default - and for our Mercure Hub - is !ChangeMe!. This is the value that you see in our .env file.

Back over on jwt.io, look at the bottom. It says "invalid signature". When a JWT is created, it's signed by a secret key. When someone uses a JWT, after decoding it - which anyone can do - they are then supposed to verify the signature of the token. Right now, it's trying to verify the signature of our JWT... but using the wrong secret. If we paste in our real secret instead... it's verified!

This can... be a bit technical. The point is this: in order to generate a JWT that will have a valid signature, you need the secret. And while anyone can read a JWT, if you mess around with its contents, the signature will fail. That's why the Mercure Hub trusts us when we send a JWT that says we can publish to any topic: the signature of our message is valid. That means it was generated by someone who has the secret key.

Check this out: let's regenerate this JWT using the same "payload" but signed using the wrong secret... something a bad user might try to do. Copy the new JWT... update the curl command in our scratchpad... copy the whole command... and paste it into the terminal. Hit enter. Unauthorized! The Mercure Hub can totally read the JSON in this message, but it sees that the signature failed and does not publish the message.

Change back to our old key in the scratch pad. And in the browser, use the correct secret: !ChangeMe!

Simplifying the Payload

  • To simplify things, change the payload to just the part we need. So remove the subscribe part - we're not trying to get access to subscribe to anything - and also remove payload. This is all we really need: some JSON that claims that we can publish to any topic signed with the correct secret. If you ever need to create a JWT by hand, this is how you do it: create the JSON you want and have something - like this site - sign it with your secret.

Copy the new, shorter JWT... and paste it in our scratchpad. Copy the entire command, paste it at your terminal and... yes! It works! In our browser, the listening tab shows a second message.
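If you ever want to mint that token from code rather than from jwt.io, here's a sketch using the jsonwebtoken npm package (the package choice is an assumption; the claim shape and the !ChangeMe! secret come from the steps above):

import jwt from 'jsonwebtoken';

const secret = '!ChangeMe!'; // must match the hub's configured secret

// The same payload we just built: "I may publish to any topic".
const token = jwt.sign({ mercure: { publish: ['*'] } }, secret, {
  algorithm: 'HS256',
});

console.log(token); // use as: Authorization: Bearer <token>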

Publishing a Turbo Stream

  • Enough about authorization & JWT. In the real world, as long as we have the correct MERCURE_SECRET configured in our app, all of this will be handled automatically thanks to the Mercure PHP library. Internally, it will use the secret to generate the signed JWT for us.

But before we start publishing messages from our code, let's look closer at the data POST parameter. So far, we've been sending JSON. And, in theory, we could write some JavaScript that listens to this topic and does something with that JSON. But remember: the turbo_stream_listen() function activates a Stimulus controller that is already listening to this topic. It's listening and waiting for a message whose data isn't JSON, but HTML.

Check it out: over in our scratch pad, instead of setting the data to JSON, I'll paste in a turbo stream. It's a little ugly because it's all on one line, but it's valid: a turbo-stream element with action="update" and target="product-quick-stats", with some dummy content inside.

Let's first see if this message shows up inside our browser tab. Oh! It actually stopped listening. It probably hit a listening timeout - that's something you can configure or disable in Mercure. I'll refresh.

Now, go copy the command... find your terminal, paste, hit enter... and head back to the browser. No surprise: here's our message with the Turbo Stream HTML. But the really cool thing is back on our site. Scroll up. Yes! It updated the quick stats area! As soon as we published the message, the JavaScript from the Stimulus controller saw the message and passed the turbo-stream HTML to the stream-processing system. That's so cool.
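For reference, here is a TypeScript sketch of the raw POST the curl command performs (the /.well-known/mercure hub path is the Mercure default and assumed here):

const hubUrl = 'http://127.0.0.1:8000/.well-known/mercure';
const token = 'PASTE-A-PUBLISHER-JWT-HERE'; // e.g. the one we built on jwt.io

const body = new URLSearchParams({
  topic: 'product-reviews',
  data: '<turbo-stream action="update" target="product-quick-stats">'
      + '<template>Dummy content</template></turbo-stream>',
});

const response = await fetch(hubUrl, {
  method: 'POST',
  headers: { Authorization: `Bearer ${token}` },
  body, // URLSearchParams encodes as application/x-www-form-urlencoded
});
console.log(await response.text()); // the hub replies with the update's id (that uuid thing)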

Of course, we aren't normally going to publish via the command line & curl: we're going to publish messages via PHP... which is way easier. Let's do that next.


Tuesday 2021-11-09 17:24:39 by petrero

49.2. Listening & Publishing

Listening in JavaScript via the Stimulus Controller

  • Ok, step 1: open templates/product/reviews.html.twig, which is the template that holds the entire reviews turbo frame. At the top, or really anywhere, add a div. Where its attributes live, render a new Twig function from the UX library we installed a few minutes ago - turbo_stream_listen() - and pass this the name of a "topic"... which could be anything. How about product-reviews. Then, close the div.

I know, that looks kind of weird. To see what it does, go refresh a product page... and inspect the reviews area to find this div. Here it is.

Ok: this div is a dummy element. What I mean is: it won't ever contain content or be visible to the user in any way. Its real job is to activate a Stimulus controller that listens for messages in the product-reviews topic. You can see the data-controller attribute pointing to the controller we installed earlier as well as an attribute for the product-reviews topic and the public URL to our Mercure hub.

Viewing a Mercure Topic in your Browser

  • Go to your network tools and make sure you're viewing fetch or XHR requests. Scroll up. Whoa! There was a request to our Mercure hub with ?topic=product-reviews. The Stimulus controller did this.

But the really interesting thing about this request is the "type": it's not fetch or XHR, it's eventsource. Right Click and open this URL in a new tab. Yup, it just spins forever. But not because it's broken: this is working perfectly. Our browser is waiting for messages to be published to this topic.
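Under the hood, this is just the browser's EventSource API. A minimal TypeScript sketch of what the Stimulus controller sets up (the hub URL is an assumption; the topic comes from the lesson):

const url = new URL('http://127.0.0.1:8000/.well-known/mercure');
url.searchParams.append('topic', 'product-reviews');

const source = new EventSource(url.toString());
source.onmessage = (event: MessageEvent) => {
  // Every message published to the topic lands here; the real controller
  // hands turbo-stream HTML from event.data to Turbo's stream processor.
  console.log('received:', event.data);
};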

Publishing Messages via curl

  • We are now listening to the product-reviews topic both in this browser tab and, apparently, from some JavaScript on this page thanks to the Stimulus controller we just activated. So... how can we publish messages to that topic?

Basically... by sending a POST request to our Mercure hub. Over in its documentation, go to the "Get Started" page and scroll down a bit. Here we go: publishing. This shows an example of how you can publish a basic message to Mercure. Copy the curl command version. Then, over in my editor, I'll go to File -> "New Scratch File" to create a plaintext scratch file. I'm doing this so we have a convenient spot to play with this long command.

In fact, it's so long that I'll add a few \ so that I can organize it onto multiple lines. This makes it a bit easier to read... but I know, it's still pretty ugly.

Before we try this, change the topic: the example is a URL, but a topic can be any string. Use product-reviews. And at the end, update the URL that we're POSTing to so that it matches our server: 127.0.0.1:8000.

We'll talk about the other parts of this request in a minute. For now, copy this, find your terminal, paste and... hit enter! Okay: we got a response... some uuid thing. Did that work?

Spin back over to your browser tab. Holy cats, Batman! It showed up! Our message contained this JSON data... which also appears in our tab.

The Parts of a Publish Request

  • Even if you're not super comfortable using curl at the command line - honestly, I do this pretty rarely - most of what's happening is pretty simple. First: we're sending a topic POST parameter set to product-reviews and a data POST parameter set to... well, whatever we want! For the moment, we're sending some JSON data, which is passed to anyone listening to this topic.

At the end of the command, we're making this a POST request to our Mercure Hub URL. But what about this Authorization: Bearer part... with this super long key? What's that? It's a JSON web token. Let's learn more about what it is, how it works and where it came from next. It's the key to convincing the Mercure Hub that we're allowed to publish messages to this topic.


Tuesday 2021-11-09 17:24:39 by petrero

52.1. Turbo Stream for Instant Review Update

When we submit a new review, we update two different parts of the page. First, the review list and review form. And second, the quick stats area up here.

Over in ProductController, in the reviews action, we do this by returning a turbo stream: reviews.stream.html.twig is responsible for updating both spots.

Cool, but remember that the reviews list and review form live inside of a turbo frame. And so, before we started messing around and doing crazy stuff with Turbo Streams, we updated that section simply by returning a redirect to the reviews page on success. The Turbo Frame followed that redirect, grabbed the matching frame from that page and updated it here.

Unfortunately... as soon as we wanted to also update the quick stats area, we had to change completely to rely on turbo streams. The problem is that we can't return a turbo stream and a redirect from the controller... so we chose to return a stream... which means that the stream needs to update both sections of the page.

Returning a Redirect And Publishing a Stream

  • Okay. So why are we talking about all of this again? Because now that we have Mercure running, we can, in a sense, return two things from our controller. Check it out: copy this dummy Mercure update code, remove it... and paste it down in the success area.

We're updating the product-reviews stream, which is the stream that we're listening to thanks to our code in _reviews.html.twig. Back in the controller, instead of returning a stream, copy the render line, delete that section, paste inside the update... and fix the formatting. Oh, also change this to renderView(): render() returns a Response object... but all we need is the string from this template. That's what renderView() gives us.

Thanks to this, our controller will now redirect like it did before... but it will also publish a stream to Mercure along the way.

Let's try it. Refresh the page... and scroll all the way down to the bottom. I want to trigger the weather widget Ajax call just so that we can cleanly see what happens with the network requests when we submit. Clear out the Ajax requests... then add a new review.

Cool! It looks like that worked! Check out the network requests. The first is the POST form submit. This returned a redirect, the frame system followed that redirect, found the frame on the next page, and updated this area. The normal Turbo Frames behavior. Then our stream caused the quick stats area to update... and it also re-updated the reviews area... because, right now, our stream template is still updating both things.


Tuesday 2021-11-09 18:01:43 by Marko Grdinić

"2:20pm. I should resume so let me do it. Close those /a/ tabs, and start watching. It is time for digital painting. Let me go through the 3 links in the previous entry.

I've been thinking about Marc's work. His lineart in isolation is not the most appealing, but his colored work is very good and really works with it. His finished works always feel very warm to me.

For Simulacrum it might be worth going for a more cool impression. That Yumeko trace was really good in that regard.

Once I study painting for a while, I really want to paint an apple. What I saw on the CSP official site inspired me a little and I want to get a sense of why I got the impression that I did.

Also, I am thinking about that watering can that I did. Just adding a flat color of yellow did not work for me. It kind of works when it is black and white, but black and yellow is wrong. That pencil work was shading, but it is not the kind of shading that really meshes with how I attempted to color it. There should be more to it.

I am convinced that I should study painting for a while. It will give me some inspiration.

https://youtu.be/9QEGEBK6nIY?t=458

This guy is pretty good. I do not really understand what painting is about though. I feel like I am not starting at the beginning.

https://youtu.be/9QEGEBK6nIY?t=564

Here he is talking about strokes. This is definitely of interest to me.

https://youtu.be/9QEGEBK6nIY?t=664

Here he is talking about banding.

2:40pm. https://youtu.be/9QEGEBK6nIY?t=826

What is rendering?

2:45pm. https://youtu.be/9QEGEBK6nIY?t=853

This is impressive. So this is how painting works.

2:40pm. I think I get the sense of what painting is vs lineart and then coloring.

https://youtu.be/-Nt9fa8jZUE The Best Brush for Digital Painting (Beginners)

Let me watch some more of this. I am into it. Sinix Design. I'll remember this channel.

https://youtu.be/-Nt9fa8jZUE?t=275

Wow, the eyes really come to life here.

https://youtu.be/-Nt9fa8jZUE?t=321

This is a pretty good sketch. He could draw manga no problem.

https://youtu.be/-Nt9fa8jZUE?t=378

Color wheels are incredibly superior to being stuck using these sliders.

https://youtu.be/-Nt9fa8jZUE?t=480

He talks about her boobs being up and wanting to give her a sense of energy, but the face looks so dour.

https://youtu.be/-Nt9fa8jZUE?t=519

Now, go master digital painting with just one digital brush! Once you are good, you can worry about custom brushes.

Yeah, this is my kind of video. Marc has so many phases in his process.

https://youtu.be/zC3OxonJcXQ Painting like a Sculptor

Let me watch this.

https://youtu.be/zC3OxonJcXQ?t=105

This is interesting. I saw some dry brushes in CSP. Now I know what they do.

https://youtu.be/zC3OxonJcXQ?t=559

I think I am starting to get it. Right now the only thing I do not understand is how to pick colors properly.

3:25pm. Played around with them for a bit.

Done with the painting like a sculptor vid. Yeah, this is nice. I get a sense of what is happening.

Rather than some of the other links, I do want to see how Ganev actually does it, rather than have him lecture me all the time.

https://www.youtube.com/playlist?list=PLz4l1wAU1EfPEHciLFlL-45SP8Xz--jtk

Er, does he actually have any videos where he paints? As talented as he is as a comedian, I am trying to learn art here.

https://www.youtube.com/watch?v=gzeoi2Jj7P0 Learning How To Shade Portraits

Let me check this out.

Holy shit, I am trying to check out his stuff on Instagram, but the thing won't let me register. When I try to create an account it says my mail is already taken, but when I try to recover it, it says the user has not been found.

https://www.artstation.com/angelganev

Nevermind Instagram. Let me check out the stuff here.

https://youtu.be/gzeoi2Jj7P0?t=166

Oh, he is doing something here.

His art itself is decent. He has a thing for skinny semi-realistic girls.

4:20pm. Done with that video. I think I see it. What he is doing is no different from painting. I think I've internalized some concepts. The thing that sticks out the most for me is stroke direction and size, and hard and soft edges. Sinix made that stick for me.

Ganev introduced me to some smart uses of the eraser.

Right now I am thinking that instead of using the soft eraser for that can, it might have been better for me to use the white pencil instead. Since this is digital art, I can also use colored pencils as well. I never even thought of bringing that in, but something like a yellow pencil might be suitable for that can.

It strikes me as a smarter idea than simply trying to fill in the yellow.

Pencils have potential. They are a bit like soft brushes.

4:30pm. What is next? I am a bit in a daze here.

https://www.youtube.com/watch?v=6JrfSb4mBaE How to Paint - Digital Painting Tutorial for Beginners

https://youtu.be/DrIsAiUpEXk Portrait Painting with FULL Commentary! DTIYS!

Hmmm, I should watch these before moving on to courses.

4:35pm. Yeah, let me go through them. Let me just take a break here first. Let me resume.

Actually, let me try out something first.

Yeah, the colored pencils work nicely. Before I do any color work though, I'll want to come to an understanding of how the GSV color palette works. I need some basic color knowledge.

5pm. Ah, I was wrong. The eraser itself does not have color. It seems CSP distinguishes white pixels from those that haven't been painted. I see. That means Ganev only uses it to create hard edges.

And the fill tools have a cool thing where they can be used to only color the touched pixels.

Hmmmm...no, I could not have used that for the can itself. The pencil creates a spread rather than a firm line.

5:05pm. Ok, my curiosity has been sated. Let me watch those vids.

Actually me move to the second. I already had my fill of these painting vids for the day.

5:15pm. I skimmed the other one as well.

Now, courses.

https://www.youtube.com/playlist?list=PLgGbWId6zgaVDPo5U8kGa1wM3CpRrjRXy Fundamentals of Digital Painting (Full Course)

https://www.youtube.com/playlist?list=PLL7eX0No1f9nxLq_YIRD7QgugK95fn_wl The Digital Painting MEGA Course - Beginner to Advanced

I want to hear it explained from start to finish. Watching painters do their thing gave me a feel for the process. Now I'll take it in straight up. After that I'll go back to Priestley's and the Drawing Database courses, back to drawing that is. I'll want to practice that fundamental skill. I'll play a bit with shading using colored pencils. After that I'll finally go back to Blender and play around with the female form more until I have it mastered. The male form is something I'll have to work on as well.

5:20pm. No, colored pencils would not work for shading. I'd just get left with white flakes everywhere. For color I'd really need a brush.

Forget that for now. It is not like I can't try painting. Let me check out the courses. I'll watch them all the way through both.

https://youtu.be/Xef_D2HNAq8?list=PLgGbWId6zgaVDPo5U8kGa1wM3CpRrjRXy&t=60

I am not sure, these aren't that great. They feel somewhat amateurish.

https://youtu.be/Xef_D2HNAq8?list=PLgGbWId6zgaVDPo5U8kGa1wM3CpRrjRXy&t=110

What a weak grip on that blade. It is like she is holding a pencil.

5:35pm. No forget it. This course does not meet my minimum standards. Let me check out the next one.

5:30pm. No, I do not know how to overlay layers to get shading effects. Just now I tried putting some dark airbrush on a bunch of yellow and do not know how to darken it. Multiply is not it. I have no idea which setting it is supposed to be. Maybe it needs opacity, I don't know.

https://youtu.be/as9F5PpvG-A?list=PLL7eX0No1f9nxLq_YIRD7QgugK95fn_wl 04 Anyone Can Learn to Paint

Here is the pep talk part. 30m. It seems this Udemy course is on photorealistic painting. That is nice. In the previous lecture he strongly recommends getting a tablet or an iPad.

Oh, it is not just a pep talk, but a step through.

5:45pm. The video resolution is so crappy here.

5:55pm. I mostly skimmed the video, but maybe I should try to follow it along for real. Right now I am just in a watching mood.

https://youtu.be/xCjMuS2kjNg?list=PLL7eX0No1f9nxLq_YIRD7QgugK95fn_wl&t=89

Let me pause here. It is lunch time.

6:35pm. https://youtu.be/vJMzUDx4cf4?list=PLL7eX0No1f9nxLq_YIRD7QgugK95fn_wl 10 Making Things Simpler Through Interface Settings

Let me just hurry things along. Actually, I kind of want to call it a day here. It is miserable to watch tutorials at 6:35pm. It is one thing if I were doing art myself.

https://youtu.be/4Z4jCZRvN48?list=PLL7eX0No1f9nxLq_YIRD7QgugK95fn_wl 11 Understanding Your Tools

Let me finish this and I will call it a day. Right now, I think my primary problem is that I do not understand layers, and the following lessons will be exactly about that, so that is something I should look forward to.

https://youtu.be/8zY1gWMpD9w?list=PLL7eX0No1f9nxLq_YIRD7QgugK95fn_wl 12 Layers and Opacity

Ah, let me just watch this.

6:50pm. Rather than blending, for shading I really do need the opacity.

https://youtu.be/Lmc0Ka7YIvw?list=PLL7eX0No1f9nxLq_YIRD7QgugK95fn_wl 13 Layer Blend Modes

I'll leave this for tomorrow. This course is off to a pretty good start. Once I go through it, I should be fully familiar with the tools of the trade at the very least.

6:55pm. Right now, it is time for rest. Maybe I'll go to bed before 2am for once. Library of Ruina is stealing all my attention, and I am definitely going to play Lobotomy Corp when I am done with it. I'll def be playing Limbus when it comes out as well.

This is the life. Sigh, one day I am going to have a rematch with both ML and trading, but for now I should just master art. I can't do much without the right tool. If I were doing it with a mouse or editing values by hand, I would not get far. So it is with art, and the same goes for ML. I need the right algos.

For art, I could in theory do everything with a mouse instead of a pen tablet, but at some point the efficiency gains translate into qualitative differences. To do research I need a hint of what the basis of intelligence is, and I do not have it. Years of messing with ML did not give me any insight into it at all. And the right algos cannot be derived directly from probability theory. At most, that knowledge will serve to check the reasoning.

I lack the crucial bit of inspiration. The solution is to just be patient and work on other things in the meanwhile."


Tuesday 2021-11-09 19:01:12 by petrero

54.2. Visually Highlighting new Items that Pop onto the Page

A Stimulus Controller to Fade Out

  • Before we try to use this somewhere directly, let's stop and think. If the goal is to remove this background after 5 seconds, then the only way to accomplish that is by writing some custom JavaScript. In other words, we need a Stimulus controller! In the assets/controllers/ directory, create a new file called, how about, streamed-item_controller.js. I'll paste in the normal structure, which imports the base Controller from Stimulus, exports the controller and creates a connect() method.

Before we fill this in, go over to _review.html.twig and use this. I'll break this onto multiple lines... because it's getting kind of ugly. Copy the class name, but delete the custom logic. Replace it with a normal if statement: if isNew|default(false), then we want to activate that new Stimulus controller. Do that with {{ stimulus_controller('streamed-item') }}. Oh, and pass a second argument: I want to pass a variable into the controller called className, set to streamed-new-item.

I'm doing this for two reasons. First, it will now be the responsibility of the controller to add this class to the element. We'll do that in a minute. And second, while we don't need it now, making this class name dynamic will help us reuse this controller later.

Anyways, head back to the controller and define the value: static values = {} an object with className which will be a String.

Cool. Down in connect(), add that class to the element: this.element.classList.add() and pass this.classNameValue.

If we stopped right now... this would just be a really fancy way to add the streamed-new-item class to the element as soon as it pops onto the page.

So let's do our real work. Use setTimeout() to wait 5 seconds... and then... if I steal some code... remove this.classNameValue.

If we just did this, after five seconds, the green background would suddenly disappear. To activate the transition when the background is removed, add another class: fade-background.

If you wanted to be really fancy, you could wait until the transition finishes and then remove this class to clean things up. But this will work fine.

Let's try it! Refresh both tabs so that we get that new CSS... then go fill in another review. When we submit... good! A green background here... and in the other browser. If we wait... beautiful! It faded out! How nice is that?

Ok team, we're currently publishing updates to Mercure from inside of our controller. But the Mercure Turbo UX package that we installed earlier makes it possible to publish updates automatically whenever an entity is updated, added or removed. It's pretty incredible, and it's our next topic.


Tuesday 2021-11-09 19:44:54 by guccipanama

version № i-cant-even-remember

i hate myself and want to die


Tuesday 2021-11-09 19:54:34 by emilyann211

Update and rename LICENSE to LICENSE of a Tale

Sometimes a Love turns to Dislike and makes a person hate themselves, because they realize there was never love. To love someone and not have them love you back causes self hate! You were All Right ...and Left Me to Rot! To feel hurt, pain, neglected, and to have someone feel obligated to be in someone's life. Rumors and Gossipers know it all, they know Everything! Their work made me realize that I didn't know Everything or didn't own anything. And to be nothing, to have all of me was my left and that side was no one's, and the sad but honest part about it is, all I thought I did wasn't Love at all. It was all my fault; the only thing I deserve is to die a gruesome death and to be tortured like I am. My true Real Life Story That no one has the permission to tell but me...

So honestly I hate You and I Hate Me Emily


Tuesday 2021-11-09 21:30:55 by Harry Garrood

Drop hpack, make it easier to use cabal-install (#3933)

Stack offers a relatively poor developer experience on this repository right now. The main issue is that build products are invalidated far more often than they should be. cabal-install is better at this, but using cabal-install together with hpack is a bit awkward.

Additionally, hpack isn't really pulling its weight these days. Current versions of stack recommend that you check your generated cabal file in, which is a huge pain as you have to explain to contributors to please leave the cabal file alone and edit package.yaml instead (the comment saying the file is auto-generated is quite easy to miss).

Current versions of Cabal also solve the issues which made hpack appealing in the first place, namely:

  • common stanzas mean you don't have to repeat yourself for things like -Wall or dependencies
  • tests are run from inside a source distribution by default, which means that if you forget to include something in extra-source-files you find out when you run the tests locally, rather than having to wait for CI to fail
  • the globbing syntax is slightly more powerful (admittedly not quite as powerful as hpack's, but you can use globs like tests/**/*.purs now, which gets us close enough to hpack that the difference is basically negligible).

We do still need to manually maintain exposed-modules lists, but I am happy to take that in exchange for the build tool not invalidating our build products all the time.

This PR drops hpack in favour of manually-maintained Cabal files, so that it's easier to use cabal-install when working on the compiler. Stack is still the only officially supported build tool though - the CI, contributing, and installation docs all still use Stack.

Stack also works a little better now than it used to, because I think one of the causes of unnecessary rebuilds was us specifying optimization flags in the Cabal file. (Newer versions of Cabal warn you not to do this, so I think this might be a known issue). To ensure that release builds are built with -O2, I've updated the stack.yaml file to specify that -O2 should be used.


Tuesday 2021-11-09 23:19:44 by Ryan Duffy

Fix end-to-end tests post Next.js migration #4346

  • In order to run either the mock tests or the end-to-end tests, we need a dev server running in CI. Previously, we would spawn that manually from inside of a script. Now, we start that from inside of the action that does the testing. This means that if you want to run the tests locally, you'll also need your local dev server running (which was already true).
  • Add the test/examples and test/scripts folders to public. This means that we will be able to run against preview branches as well.
  • Add a GitHub Action for setting up Next.js with a cache, so we only have to copy one line everywhere, not an entire step. It would be nice if we could further refactor out the test stripes, because a lot of things are repeated 9 times.
  • Remove the dev-server.js file. If we're going to use Next.js, then we should try to keep our config as out-of-the-box as possible, in order to allow us to stay up-to-date. The fact that the test folder has to be available on the development server for the tests to run at all seems like the real problem here. A good follow-on task would be to sort out the best way to handle those files in the test/dev environment.
  • Go back to node 14 for lock file. I didn't realize when I changed this that we were stuck on such an outdated node version. We should fix this ASAP, but until then, let's stick with what was working, just to minimize the factors that have gone into resolving all of the failing tests.
  • Log more from playwright tests. This could be annoying, maybe we should tune it, but it's nice to know what's going on in these tests, and we're only going to look at the output when something is failing anyways, so it may as well be verbose.
  • Disable the worst of the failing tests.

That's true! We still have flaky, intermittent tests. The goal of this PR was to get things to run; getting them to pass is a future-us problem! I have been unable to get the mock tests to work to any extent on CI, although they do pass intermittently locally.

There are also tests (it doesn't seem to be consistent which ones) where the browser starts up and just never seems to navigate to the page properly. I haven't been able to figure out where it's getting stuck, but when tests time out now, it's usually for this reason.

I'm going to put these into a notion doc as well, but documenting here couldn't hurt. Three things that tripped me up on this PR:

  • Any merge conflicts mean actions will not run at all. If you have a branch and it's unclear why actions aren't kicking off, make sure you check for and resolve merge conflicts.
  • When using actions whose type is javascript, your inputs get automatically turned into environment variables, which is convenient. However, when using composite actions, this won't happen: you have to manually pass the environment variables in with an expression.
  • The checkout action that we use all the time performs a strict clean of the directory that it's about to clone into! So, if you are trying to do some sort of caching or otherwise unloading assets into a dir that you also need to do git checkout XXXX in (with the checkout action), you either need to checkout first, or you can use [clean: false](https://github.com/actions/checkout#usage) I think, though I haven't actually tried this.

Fixes #4313


< 2021-11-09 >