< 2020-02-26 >

2,562,038 events, 1,146,461 push events, 1,858,424 commit messages, 136,183,122 characters

Wednesday 2020-02-26 00:32:43 by Alexander Böhn

This commit is actually brought to you by tons of other code ... which the piquedly intrepid may have the pleasure of perusing at this World Wide Web uniform resource location:

https://gist.github.com/fish2000/e70c387d05fb17b00953a4dd8708fe39

... basically. Also by the way, I hate everything about Ruby, and I think whoever designed the Cap‘n’P CLI should be subjected to a mandatory psych evaluation, with like the full complement of cognitive tests, personality assessments, indefinitely until whatever the fuck is wrong with their mind is fully diagnosed, isolated, and locked away where no one else can get any more of it on them. YES.


Wednesday 2020-02-26 02:45:39 by Vera Limani

boy howdy i sure hope i fixed that stack overflow bullshit i'm sorry mitu


Wednesday 2020-02-26 11:45:42 by Marko Grdinić

"9:05am. Let me chill for a bit and then I will start. Yesterday went quite well. For once I am learning quite well, and internalizing the material solidly. Hopefully I've gone beyond wasting time looking at random things with this.

9:25am. Ok. Since I am in the mood I won't waste too much time. Let me start now. If I am lucky, I'll be able to go through the entirety of part two in time for breakfast. Let me get this thing on.

10:15am. 186/551. Done with chapter 7. Let me continue onwards. There was a bunch of new material in chapter 7 that I did not see in either the TS docs or the previous books.

10:35am. 202/551. Done with chapter 8. This stuff is not too complicated. I've been through this before.

10:45am. Focus me, focus. Let me get through this chapter.

10:55am. 226/551. Done with chapter 9. Yeah, no way will I finish 200 pages in one morning. But 70 is not bad progress regardless. Let me stop here for a while.

11:10am. Actually, let me go on for another chapter and then I will take that break. It is too early to stop right now.

11:35am. 253/551. Skimmed the previous chapter.

12pm. 283/551. The OO stuff is dull, but at least I got through it.

One thing I still do not get is why mutating the prototype of an object does not mutate the prototype itself. When making an object, does the Object prototype get copied?
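For the record, the answer: prototypes are shared by reference, never copied, and assigning through an instance just shadows the prototype's property rather than mutating it. A quick sketch of my own, not from the book:

const proto = { greet: "hello" };
const obj = Object.create(proto);

console.log(obj.greet); // "hello", found via the prototype chain

obj.greet = "hi"; // creates an own property on obj; proto is untouched
console.log(proto.greet); // still "hello"
console.log(Object.getPrototypeOf(obj) === proto); // true - shared, not copied

proto.greet = "hey"; // mutating the prototype itself IS visible everywhere
console.log(Object.create(proto).greet); // "hey"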

Just like with loop environments, JS is really wonky when it comes to scope. All the references are implicit and the code does not make logical sense as written.
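The loop case is presumably the classic var capture problem. A quick sketch, again my own rather than the book's:

// With var there is a single binding shared by every iteration:
for (var i = 0; i < 3; i++) {
    setTimeout(() => console.log(i)); // logs 3, 3, 3
}

// With let each iteration gets its own fresh binding:
for (let j = 0; j < 3; j++) {
    setTimeout(() => console.log(j)); // logs 0, 1, 2
}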

Well, nevermind that. It is not like I am going to do significant programming in TS. I just have to understand it well enough to use it, I certainly won't ever need to mess with under-the-hood stuff.

12:20pm. 303/551.

filter<V extends T>(predicate: (target: T) => target is V): V[] {
    return this.items.filter(item => predicate(item)) as V[];
}

Interesting bit of code. TS is quite powerful to be able to do this.
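A quick usage sketch (the collection class here is an assumed stand-in, not the book's exact example):

class Person { constructor(public name: string) {} }
class Employee extends Person {
    constructor(name: string, public role: string) { super(name); }
}

class DataCollection<T> {
    constructor(private items: T[]) {}
    filter<V extends T>(predicate: (target: T) => target is V): V[] {
        return this.items.filter(item => predicate(item)) as V[];
    }
}

const people = new DataCollection<Person>([
    new Person("Alice"),
    new Employee("Bob", "Engineer"),
]);

// The type predicate lets the result be typed Employee[] with no cast
// at the call site:
const employees = people.filter((p): p is Employee => p instanceof Employee);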

12:30pm. 312/551. Apart from that interesting tidbit, I admit I skipped most of the last chapter.

I am making really good progress through the book so far. All these advanced features do not really matter, as FP just has much better and simpler solutions.

I am going to do this. I am going to master webdev. Lately my thoughts have been going back to 2016 and that poker game. It was really a pain in the ass to make that GUI for it. Even something as simple as charting the training progress makes .NET shit itself.

The web is much better suited for this kind of work.

This time I am going to do what I could not do back then, and in Pharo.

I am going to do every single thing right. And I am going to draw out the true power of those neurochips.

I've been thinking about getting a job at Intel's neurochip division, but on second thought that sort of thing does not suit my character. Rather than mess with that, I should just inquire whether they would be open to sponsoring my work. I really am a PL researcher at this point, and before I can move on from this phase of my life, I really have to finish it all.

Having a boss telling me what to do is not a role I should be aspiring to. Paid research is a thing in this world.

Before I do that though, I actually want to finish v0.2. I want something that I can show to other people. So my pitch will be a mix of the new, and samples from my old work. I think it should be decently impressive when I show them those Cuda kernels doing AD without a single type annotation in sight.

What I will be applying for is the chance to do in their language for neurochips what I did in 2018 in Cuda for GPUs. I think the pitch is good, but it is not a guaranteed success. For all I know, the guys making the hiring decisions might not see the importance of this, or look down on programming. Maybe they'll want to monkey it all the way through. Who knows.

All I really need is that next gen piece of hardware and I will be set. It does not have to be Intel, surely I'll hit 1 out of 100.

Unlike the rest of the ML crowd, I will easily be able to write my own libraries when I have Spiral v0.2 ready. I do not need a framework for this."


Wednesday 2020-02-26 12:19:44 by Daniel Axtens

powerpc: Book3S 64-bit "heavyweight" KASAN support

Implement a limited form of KASAN for Book3S 64-bit machines running under the Radix MMU:

  • Set aside the last 1/8th of the first contiguous block of physical memory to provide writable shadow for the linear map. For annoying reasons documented below, the memory size must be specified at compile time.

  • Enable the compiler instrumentation to check addresses and maintain the shadow region. (This is the guts of KASAN which we can easily reuse.)

  • Require kasan-vmalloc support to handle modules and anything else in vmalloc space.

  • KASAN needs to be able to validate all pointer accesses, but we can't instrument all kernel addresses - only linear map and vmalloc. On boot, set up a single page of read-only shadow that marks all these accesses as valid.

  • Make our stack-walking code KASAN-safe by using READ_ONCE_NOCHECK - generic code, arm64, s390 and x86 all do this for similar sorts of reasons: when unwinding a stack, we might touch memory that KASAN has marked as being out-of-bounds. In our case we often get this when checking for an exception frame because we're checking an arbitrary offset into the stack frame.

    See commit 20955746320e ("s390/kasan: avoid false positives during stack unwind"), commit bcaf669b4bdb ("arm64: disable kasan when accessing frame->fp in unwind_frame"), commit 91e08ab0c851 ("x86/dumpstack: Prevent KASAN false positive warnings") and 6e22c8366416 ("tracing, kasan: Silence Kasan warning in check_stack of stack_tracer")

  • Document KASAN in both generic and powerpc docs.

Background

KASAN support on Book3S is a bit tricky to get right:

  • It would be good to support inline instrumentation so as to be able to catch stack issues that cannot be caught with outline mode.

  • Inline instrumentation requires a fixed offset.

  • Book3S runs code in real mode after booting. Most notably a lot of KVM runs in real mode, and it would be good to be able to instrument it.

  • Because code runs in real mode after boot, the offset has to point to valid memory both in and out of real mode.

    [ppc64 mm note: The kernel installs a linear mapping at effective address c000... onward. This is a one-to-one mapping with physical memory from 0000... onward. Because of how memory accesses work on powerpc 64-bit Book3S, a kernel pointer in the linear map accesses the same memory both with translations on (accessing as an 'effective address'), and with translations off (accessing as a 'real address'). This works in both guests and the hypervisor. For more details, see s5.7 of Book III of version 3 of the ISA, in particular the Storage Control Overview, s5.7.3, and s5.7.5 - noting that this KASAN implementation currently only supports Radix.]

One approach is just to give up on inline instrumentation. This way all checks can be delayed until after everything is set up correctly, and the address-to-shadow calculations can be overridden. However, the features and speed boost provided by inline instrumentation are worth trying to do better.

If at compile time it is known how much contiguous physical memory a system has, the top 1/8th of the first block of physical memory can be set aside for the shadow. This is a big hammer and comes with 3 big consequences:

  • there's no nice way to handle physically discontiguous memory, so only the first physical memory block can be used.

  • kernels will simply fail to boot on machines with less memory than specified when compiling.

  • kernels running on machines with more memory than specified when compiling will simply ignore the extra memory.

Despite the limitations, it can still find bugs, e.g. http://patchwork.ozlabs.org/patch/1103775/

At the moment, this physical memory limit must be set even for outline mode. This may be changed in a later series - a different implementation could be added for outline mode that dynamically allocates shadow at a fixed offset. For example, see https://patchwork.ozlabs.org/patch/795211/
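For the unfamiliar: the 1/8th figure falls out of the generic KASAN mapping, in which one shadow byte tracks one 8-byte granule of memory. A sketch of the arithmetic, in TypeScript purely for illustration (the offset below is a made-up placeholder; the real ppc64 offset is calculated in the Makefile, and the real implementation is the kernel's C code):

// Generic KASAN address-to-shadow mapping:
//   shadow = (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET
const KASAN_SHADOW_SCALE_SHIFT = 3n;             // 2^3 = 8-byte granules
const KASAN_SHADOW_OFFSET = 0xa80e000000000000n; // placeholder value only

function shadowAddressFor(addr: bigint): bigint {
    return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
}

// A shadow byte of 0 means the whole granule is addressable, 1..7 means
// only the first N bytes are, and high values mark redzones/poison.
// Covering N bytes of memory therefore costs N/8 bytes of shadow, which
// is where "set aside 1/8th of memory" comes from.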

Suggested-by: Michael Ellerman [email protected]
Cc: Balbir Singh [email protected] # ppc64 out-of-line radix version
Cc: Christophe Leroy [email protected] # ppc32 version
Signed-off-by: Daniel Axtens [email protected]


Changes since v7:

  • Don't instrument arch/powerpc/kernel/paca.c, it can lead to hangs. Also don't instrument setup_64.c; it's too early to be safe.

  • Reword this commit message, thanks Mikey.

  • Reinsert some tidier stack walking code, with hopefully some better justification. Certainly it is a common, cross-platform sort of issue.

  • Fix a stupid bug in early printing where I multiplied by SZ_1M rather than divided.

Changes since v6:

  • rework kasan_late_init support, which also fixes a book3e problem that snowpatch picked up (I think)
  • fix a checkpatch error that snowpatch picked up
  • don't needlessly move the include in kasan.h

Changes since v5:

  • rebase on powerpc/merge, with Christophe's latest changes integrating kasan-vmalloc
  • documentation tweaks based on latest 32-bit changes

Changes since v4:

  • fix some ppc32 build issues
  • support ptdump
  • clean up the header file. It turns out we don't need or use KASAN_SHADOW_SIZE, so just dump it, and make KASAN_SHADOW_END the thing that varies between 32 and 64 bit. As part of this, make sure KASAN_SHADOW_OFFSET is only configured for 32 bit - it is calculated in the Makefile for ppc64.
  • various cleanups

Changes since v3:

  • Address further feedback from Christophe.
  • Drop changes to stack walking, it looks like the issue I observed is related to that particular stack, not stack-walking generally.

Changes since v2:

  • Address feedback from Christophe around cleanups and docs.
  • Address feedback from Balbir: at this point I don't have a good solution for the issues you identify around the limitations of the inline implementation, but I think it's worth trying to get the stack instrumentation support. I'm happy to have an alternative and more flexible outline mode - I had envisioned this would be called 'lightweight' mode as it imposes fewer restrictions. I've linked to your implementation. I think it's best to add it in a follow-up series.
  • Made the default PHYS_MEM_SIZE_FOR_KASAN value 1024MB. I think most people have guests with at least that much memory in the Radix 64s case so it's a much saner default - it means that if you just turn on KASAN without reading the docs you're much more likely to have a bootable kernel, which you will never have if the value is set to zero! I'm happy to bikeshed the value if we want.

Changes since v1:

  • Landed kasan vmalloc support upstream
  • Lots of feedback from Christophe.

Changes since the rfc:

  • Boots real and virtual hardware, kvm works.

  • disabled reporting when we're checking the stack for exception frames. The behaviour isn't wrong, just incompatible with KASAN.

  • Documentation!

  • Dropped old module stuff in favour of KASAN_VMALLOC.

The bugs with ftrace and kuap were due to kernel bloat pushing prom_init calls to be done via the plt. Because we did not have a relocatable kernel, and they are done very early, this caused everything to explode. Compile with CONFIG_RELOCATABLE!

test

Signed-off-by: Daniel Axtens [email protected]


Wednesday 2020-02-26 12:55:06 by reydlc

Finish README

Oh yeah did I mention I love Garlic Butter flavored Grouper with homemade Garlic Mash potatoes


Wednesday 2020-02-26 13:54:21 by Steve Howell

settings_config: Use getters for encapsulation.

The only place we use settings configuration is when we load the settings pages, so there's no reason that we need to eagerly create these data structures. These data structures are quite cheap to create, so performance isn't the issue here.

If verbosity is a concern, then I will freely admit that the old version:

settings_config.invite_to_stream_policy_values

is in fact more concise than the new version:

settings_config.get_invite_stream_policy_values()

Sorry about that. (We can remove the "get_" prefix if that helps.)

It's also worth noting that one of the data structures here was already protected by a getter: get_all_display_settings(). The reason we used a getter there is that we didn't want every unit test that transitively required settings_config to have to stub out page_params just so this all-important line of code wouldn't crash at "require" time:

render_only: {
    high_contrast_mode: page_params.development_environment,
    dense_mode: page_params.development_environment,
},
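To spell out the pattern behind that snippet (shapes simplified; treat this as a sketch rather than our exact module):

declare const page_params: { development_environment: boolean };

// Eager version: the object is built at require time, so any test that
// transitively requires this module must stub out page_params first.
//
//     export const all_display_settings = { render_only: { ... } };

// Getter version: nothing touches page_params until a settings page
// actually asks for the data.
export function get_all_display_settings() {
    return {
        render_only: {
            high_contrast_mode: page_params.development_environment,
            dense_mode: page_params.development_environment,
        },
    };
}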

UPCOMING ISSUES:

All the values exposed in settings_config require i18n, and we'd like to avoid doing that at "require" time for unit testing reasons.

The module settings_config will soon end up being an indirect dependency for more modules. For example, we may have people.js indirectly depend on settings_config in a relatively substantive way (i.e. we need to know the codes related to admin access). But there are tests that use people that will never run through the functions people needs for checking admin email access, and we don't want to make those tests depend on i18n.

OTHER FUTURE CONCERNS:

In the initial round of reviews (plus some offline chat conversations), it was pointed out that we can bypass the i18n concerns by simply making it a leaf. Here are my thoughts:

- Any change to `i18n` affects 59 modules
  directly, plus many others transitively,
  which feels like a fairly big rabbit hole.

- Absolutely nothing about this change is gonna
  make it harder in the future to reorganize
  `i18n` dependencies.  In fact, this makes it
  easier.

Some of my concerns about going down the i18n rabbit hole are probably PTSD from prior issues with i18n that have since been cleaned up:

- `i18n` used to be a major culprit for node
  test slowness, but we mostly mock around
  that now

- `i18n` used to be a major source of page-load
  bugs, but we've since cleaned that up by
  aggressively downloading translations instead
  of having strange lazy-load hacks

In 2020 sweeping 59 files for i18n might be a mostly risk-free proposition. I say to whoever wants to do that, "go for it!". This PR will make your job marginally easier!


Wednesday 2020-02-26 14:57:24 by hugo-fg

tree view works ! fuck you QAbstractItemModel and your documentation


Wednesday 2020-02-26 15:03:47 by Ændrew Rininsland

Ostensibly ported o-overlay; see: https://github.com/Financial-Times/spark-lists/tree/master/src/client/components/Origami/Overlay

That said I'm abandoning this PR for now because I don't need modals anymore and this is making me hate my life.


Wednesday 2020-02-26 15:19:14 by Marko Grdinić

"2:30pm. The chores took a while today. Phew. I didn't even slack much, I just wasted my time reading Twitter posts.

But it was fun. US politics is seemingly the best entertainment in the world. Coming right up in the next episode is Corona-chan finding its way into Africa. China did a good job putting everything under lockdown, but Africa will bear the brunt of the disease head-on as it does not have the social wherewithal to do so.

2:35pm. Let me finally start this thing. I want to finish part two.

I am on page 312/551 at the moment. Only 57 pages until I am done with it.

3:10pm. Had to take a break. Let me resume. Honestly, even if I finish part two ahead of schedule, I do not know if I will feel like diving into part 3 today, so let me go at it easy. Maybe take the chance to break the monotony by watching some Wildberger vids after I am done. I've been neglecting this for a while now.

3:30pm.

type resultType<T extends boolean> = T extends true ? string : number;
let firstVal: resultType<true> = "String Value";
let secondVal: resultType<false> = 100;

Oh, so not only does TS support higher kinded types, but it also supports pattern matching on types and values at the type level.

Interesting.
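One extra wrinkle worth noting (not from the book, just a general TS fact): boolean is the union true | false, and conditional types distribute over unions when the checked type is a bare type parameter, so the same alias also resolves for boolean:

// Continuing from resultType above:
type eitherVal = resultType<boolean>; // resolves to string | number
let third: eitherVal = "String Value";
let fourth: eitherVal = 100;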

3:45pm. 338/551. This stuff might be interesting, but it is the same sort of trap I've fallen into while doing Spiral. Constraints are the way to go here, not this.

3:55pm. 344/551. Only 25 more pages to go. I mostly skimmed the last chapter, as I do not want to waste too much of my time studying this. I'll come back to this if I need it.

Let me finally go through this last chapter of part two and then the practical stuff will begin.

4:20pm. Damn, that was some awesome thunder. Did not see it coming. I'll have to shut down.

365/551. Only 4 pages left to go. Agh, let me skim it and then I am done for the day."


Wednesday 2020-02-26 16:01:52 by paul2928

Create CianGalligan.sex

Hi I I'm Cian's girlfriend. Tell him to come home now and collect his shit


Wednesday 2020-02-26 16:14:43 by CoenHolland

Matrix is checked and updated, reaction forces are calculated. They kinda make sense; however, two are still very high, so we don't trust them. If you look at the plots they are bullshit, so we have to go to the TAs first thing tomorrow


Wednesday 2020-02-26 17:32:29 by Rafael Espinoza

fix: Filename interpretation, break Migration interface

Being more flexible is an advantage since different migration tools produce their versions a little differently, and it'd be cool if godfish could work with migration files produced by those other tools. If the timestamp is too long, then the extra numbers would just spill over into the migration name.

feat: Add Version interface, break Migration interface

  • Store version label for timestamp b/c it fixes filename interpretation where the version is too short. Also remove a superfluous return value in parseVersion.
  • Simplify Version implementation. A time.Time is a little more complicated than what's needed in this tool. By reducing the comparable value to the lowest common denominator (works w/ unix epoch values) we can be more flexible with interpreting filenames. We can rely on the Version interface to do the comparing. By default keep the datetime implications, but this is only for formatting. Update comments.

feat: Add Direction alias type, break Migration interface

  • Choose direction label during migration filtering in reverse

fix: List migrations file bugs

  • From the listMigrationsToApply function, the list of applied versions will only be in the forward direction. But when using that list in selectMigrationsToApply, we need to manually reset the direction. In this context, we're more concerned with how versions relate to each other.
  • Update makeFilename to be more lenient w/ direction
  • Hoping to leverage globbing when searching for migrations to apply. When you only have the version to go off of, the direction alias is unclear.

fix: connection bugs

  • Did not realize that using the Query method on the sql.Connection struct would return a *sql.Rows, which needs to be closed. There is no need to do something with the Rows in these Driver methods. The code was ignoring that variable anyways, but the reference was probably still there. An alternative is to use the Exec method on sql.Connection. This returns a sql.Result, which doesn't need to be closed.
  • This error has been prevalent in the mysql driver tests. Here is a brief account of the error: When running the mysql Driver tests, we frequently run into error 1040, max connections. It seems that every time the *sql.Connection is referenced, it creates a new connection? The sql.Connection struct is actually a connection pool, and you rarely, if ever, need to close it yourself. It might close itself on its own, I don't know. Whatever it may be, I ran one test in debug mode while monitoring the number of open connections. To my surprise, one call to the godfish.Migrate function opened up 6 connections!

refactor: rename Migration method Name to Label

  • Name in the Migration interface has no functional effects, it's just a convenience label.

refactor: clean up migration parsing functions

  • Using "base" as a variable name is a little confusing. Does it mean a numeric base (confusing when considering version numbers)? Remove unused parameter from parseTimestamp function.

refactor(postgres): Use error code for schema migrations does not exist

refactor: Add migrationFinder type

  • This type is syntactic sugar for organizing the functions that read migration files and select the appropriate ones. There was only one entry point and the remaining methods were used amongst themselves. Now that entry point and the subsequent dependencies are easier to spot. Removed the direction not found errors, those are just annoying.

refactor: remove some unnecessary abstraction

  • The scanAppliedVersions function was set up using a callback pattern so that it could make use of a closure. It turned out that there was not much needed and it could simply be passed in. Also eliminate a helper function b/c there are too many functions used in just one place!

test: Extract stub driver into own package

  • Need a package-level boundary to test parts of the exported API without having to rely on a working database system. This is especially useful for dealing with migration files. Many, but not all, of the tests in versions_test.go are obsolete because the functionality is covered by internal/test.go. A few edge cases would have to be moved over and adapted.

test: Reorganize setup and teardown

  • Refactor driver tests so setup and teardown are better available. It's easier to recreate bugs and test specific functions now. The exported test function has fewer boilerplate definitions.
  • The test organization is not perfect, but overall there are fewer assumptions and less magic.

test: Update tests

  • Add workaround to stub test driver
  • remove annoying filename tests
  • Add Driver tests for alternate directions, filenames
  • Pass DB_USER to internal driver test
  • Update test variables and types to be more self-descriptive.

docs: Update README

build: Update Makefile


Wednesday 2020-02-26 17:42:37 by Marko Grdinić

"5:10pm. I am back.

5:30pm. And I am done with lunch and that orange.

Now I can finally start the web chapter, but I feel so drowsy. For these parts I'll definitely need focus. So I'll save most of this for the next few days, but let me at least open it today.

5:50pm. This went by quickly.

// "outFile": "./",                       /* Concatenate and emit output to single file. */

I wonder why I need a bundler if this option exists.

6pm.

ERROR in ./src/index.ts
Module build failed (from ./node_modules/ts-loader/index.js):
Error: Cannot find module 'typescript'

Why does webpack keep giving me this error? I even installed typescript locally.

6:10pm. What a bizarre error. I have no idea what is going on and Google is giving me nothing. Yeah, this is the kind of trouble I'd expect from command line shit, though things have been remarkably smooth up to this point.

Let me follow the instructions exactly this time.

6:15pm. Oh it works. Most likely the reason it failed for me previously is that I used tsc --init, and that thing uses commonjs modules.
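For reference, a minimal webpack config of the sort the book sets up - my own sketch, with assumed paths and filenames, not the book's exact file. The key point is that ts-loader resolves the typescript compiler from the project's local node_modules, so typescript has to be installed locally alongside ts-loader (npm install --save-dev typescript ts-loader webpack webpack-cli):

// webpack.config.ts (assumed layout: sources in ./src, bundle in ./dist)
import * as path from "path";

export default {
    mode: "development",
    entry: "./src/index.ts",
    module: {
        rules: [
            // ts-loader compiles .ts files using the locally installed
            // typescript package; without it, webpack fails with
            // "Cannot find module 'typescript'".
            { test: /\.ts$/, use: "ts-loader", exclude: /node_modules/ },
        ],
    },
    resolve: { extensions: [".ts", ".js"] },
    output: {
        filename: "bundle.js",
        path: path.resolve(__dirname, "dist"),
    },
};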

6:30pm. 379/551. The chapter is going quite smoothly, but I think I will call it a day here.

So far, this book has been excellent from start to finish. I am definitely learning here.

I think that once I have an understanding of these projects, I will be able to use them as a template when I turn to studying ASP.NET. I definitely want to understand servers through and through."


Wednesday 2020-02-26 18:12:32 by Tanner

Finalize new PDI image file format

Well, this is it: The Big One (tm). This was the last major blocker before a new stable release, and the major pieces of work are (finally) done. (Some clean-up remains, but it won't affect the format itself - just various helper functions.)

PDI files are PhotoDemon's internal image format. They support saving/loading images with all data necessary to perfectly recreate an image in PhotoDemon.

Why not use an existing file format? Once I added non-destructive text layers to the program, I had no choice but to develop a custom file format: Photoshop's PSD files do not document editable text layers (ugh), OpenRaster has no provision for them either, and opaque formats used by other photo editors (e.g. PDN by Paint.NET) don't support them. If I wanted this feature, a custom solution was the only option.

The resulting PDI file format has evolved over time, but in the past several years, I've written a number of custom file format parsers for PD. Photoshop PSD was the biggest one by far, and while working on it I learned a lot about the pitfalls of certain design decisions. (PSD as a file format is now 30+ years old, so it's had to evolve to support a ton of different things.) Most importantly, I learned that my current PDI approach - a .zip-like container with a master directory referencing the location of all data chunks in the file - can ultimately cause many, many headaches.

The PSD file format has all kinds of horrifying branches and conditions to the file's central directory to cover every variation of a given feature, and they wisely moved later features to a "chunk" system, where data is stored in individual chunks (each prefaced by a 4-letter ASCII identifier and a length measurement to the next chunk). Such a chunk approach is also used by other formats, like PNG files (although PNGs add additional stupid complexity, like mandatory checksums for each chunk despite zlib streams already checksumming themselves) - and when I wrote PD's PNG parser, I found myself really liking the "chunk" approach, particularly the way it allows parsers to simply skip over features they don't understand. This approach makes a file format easily extensible and very future-proof, as you can just add new chunks for new features, and old versions of your software require no modification to deal with (i.e. ignore) those features.
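To make the chunk idea concrete, here's a rough reader sketch - in TypeScript purely for illustration, since PD itself is VB6, and pdPackageChunky's real layout differs in its details (compression flags, etc.):

// Each chunk: 4-byte ASCII id, 4-byte little-endian payload length,
// then the payload. Parsers can skip any id they don't recognize.
interface Chunk { id: string; data: Uint8Array; }

function readChunks(buf: Uint8Array): Chunk[] {
    const view = new DataView(buf.buffer, buf.byteOffset, buf.byteLength);
    const chunks: Chunk[] = [];
    let pos = 0;
    while (pos + 8 <= buf.length) {
        const id = String.fromCharCode(...buf.subarray(pos, pos + 4));
        const len = view.getUint32(pos + 4, true); // little-endian
        chunks.push({ id, data: buf.subarray(pos + 8, pos + 8 + len) });
        pos += 8 + len; // unknown ids are skipped by length alone
    }
    return chunks;
}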

As such, I've long wanted to modernize the PDI file format to something similarly simple, extensible, and fast. Last year, I created a new "pdPackageChunky" interface that implements a very fast, very clean chunk-based packaging approach, with nice optional support for compression on every chunk. Instead of hard-coded enums (which are difficult to modify once coded), short string identifiers are used for all critical features, making the format trivial to extend. I migrated most of PD's internal code - like saving layer undo/redo data - to the new format over the past year, and have been silently testing it via nightly builds. It's proven very robust under a range of different use-cases.

I finally feel like the engine is sufficiently battle-hardened to rebuild the PDI format itself atop it. It's important to note that PDI files are not just used when users save PDI files directly - they're also used by PD itself behind-the-scenes. For example, the Undo/Redo engine generates a PDI file whenever an action affects the entire image (e.g. Image > Flatten image). This means that the new format impacts many places in the program, not just the obvious "save/load PDI files".

Importantly, all previous PDI files are of course still supported! PhotoDemon silently falls back to whichever decoder it needs as it encounters legacy files. The switch should be 100% transparent to users, with the only difference being that the new format is quite a bit faster (at both reading and writing) while also producing smaller files.

I still need to harden the engine a bit against deliberately malformed inputs, which is what I'll be tackling next. Some peripheral functions (such as pd2D object serializers) also need to be updated to make better use of the new format design. But those changes can be made non-destructively, without affecting the core format itself.

What I need now is for nightly version users to test the hell out of this! Because PDI files are silently used everywhere throughout the program, normal program usage is enough to turn up any critical bugs. You don't even need to manually save or load PDI files to do it.

I will of course be testing these new changes like crazy as well, and I've already put a huge collection of both old and new PDI files through a wide battery of tests. (I wouldn't be merging this work otherwise!)

Anyway, I'm writing this long commit message to remind myself of what led to this decision... just in case I deeply regret it in the future. 😉


Wednesday 2020-02-26 19:11:10 by nia

srb2: Update to 2.2.2

Changes in 2.2:

General

  • Slopes have been implemented into almost the entire campaign, including support for launching off of quarterpipes. No, before you ask, loops still aren’t possible.
  • An enormous number of graphics and textures have been updated or redone. Highlights include the title screen, Sonic and Tails, with separate sprites of Tails’s tails for optimum mofumofu.
  • Practically the entire soundtrack has been redone.
  • Character sprites now face the direction the player’s control inputs point instead of in the direction the camera is facing.
  • Automatic braking, a new assist feature, has been added. While enabled, releasing the controls will cause the player’s character to attempt to stop instead of coasting forward.
  • Tails’s AI has been significantly improved in Sonic & Tails mode, including allowing him to be commanded to fly you without using player 2’s controls.
  • The attraction, elemental, and force shields now have a jump-spin ability like whirlwind and armageddon already did.
  • Continuing the game after getting a game over now starts the player with more lives for each continue used.
  • The intro cutscene has been revamped with brand-new art, and the game now has a short ending sequence.

Levels

  • Arid Canyon Zone Acts 2 and 3 now exist.
  • Almost the entire rest of the campaign has been remade from scratch or significantly updated.
  • Several bosses have had their arena and behavior updated.
  • The final battles have been adjusted to make losing not kick the player all the way back to the beginning of Eggrock.
  • A short, optional tutorial stage has been added.
  • Two stages previously from the OLDC have been included as unlockables.
  • Cooperative mode now uses the old 2.0 special stages, which have been slightly updated to be more multiplayer-friendly.

Interface

  • The menus have been massively revamped to both look better and be easier to understand.
  • Controls, menus, and various other things have been renamed to make them easier to understand.
  • Record attack now has HUD elements to display the buttons being pressed both during gameplay and while watching replays.
  • Multiple accessibility features have been added, including closed-captioning and the ability to adjust the palette at runtime to add contrast to aid colorblind players.

Engine/Editing

  • The palette has changed again to provide slightly more diverse color options.
  • Music no longer restarts from the beginning after an interruption, such as getting an extra life.
  • Plugging in a controller during gameplay will allow that controller to be used instead of requiring the game to be restarted to recognize it.
  • Added support for paper sprites, which allow the sprite to be rendered as if it’s on an upright piece of paper, becoming thinner when viewed at an angle and disappearing entirely when viewed from the side. Think Paper Mario.
  • Textures can now be used as flats (but not vice-versa).
  • Skybox rendering has been significantly optimized.
  • PNG images can now be used as graphics, at any resolution (but not too high or you’ll run out of memory).
  • Sprite rotation is now supported.
  • libopenmpt support.
  • Added support for the MD3 model format.
  • So many Lua changes we couldn’t possibly hope to list them all here.

Wednesday 2020-02-26 19:56:13 by ADejbakhsh

User profile management is now available. A few things to note:

  1. OAuth users can change all their info like a normal user
  2. A lot of indentation changes happened, sorry 😅
  3. There is a function ready to be linked once the img feature is done
  4. I know that the implementation is ugly, but the smart versions were unreadable or hack-like, so I decided that simpler is better
  5. Speaking of hacks, I had to use Jerome's way to get the user id without TypeScript blocking me. Someday we will have to configure TypeScript to accept req.user.id

Wednesday 2020-02-26 20:30:36 by Chaosvolt

Improvements to spell mechanics

  • Removed the feature where learning Arcane Blessings overrides the relevant Magic Sign, while retaining the feature where it displaces Sanguine Marks. While this will clutter up the mutation and spell menus a bit more, it enables a fuckton of features I previously couldn't utilize due to how finicky this mechanic was.
  • Set pattern scrolls to grant the exact specific trait being targeted via MUTATE_TRAIT, instead of pinging a custom category. While this allows me to remove all those annoying one-note mutation categories and makes pattern scrolls actually work 100% of the time instead of sometimes randomly failing, by far the BIGGEST benefit to this is that using the same pattern scroll more than once will trigger a levelup of that spell (since the mutation grants the target spell at level 1), meaning going through the expense of crafting more pattern scrolls can be used to sidestep the tediousness of having to grind EXP via casting. This helps with solidifying the intended balance where it's easier to max out Magic Signs, but Arcane Blessings are more powerful when maxed out.
  • Shaved max level of Sanguine Marks down to 5 and tweaked their effect scaling accordingly, since they're still the only spells that lack any good way to quickly gain experience.
  • Gave some of the major NPCs a selection of spells they can teach. This feature isn't yet complete however.

Wednesday 2020-02-26 20:44:12 by PIANO4DAYZ

so welcome my crankkky crew this is me committing from the command line and i gotta say, the black background is pretty cool, very no color. so im writing right now a commit message for a file that makes commit messages, inception time. while you are reading this, what is your opinion on the fragility of human life and existence? oh, you think that it kinda sucks? i see. that is a good opinion. but do tell me, why does it suck, when in reality, only 50% of doctors recommend life as a vacuum? oh, the humanity. but anyways, in the wise words of a person who is addicted to accordions in our 7th period class, and talks of german culture, "have a day".


Wednesday 2020-02-26 22:21:48 by Mark B

Direct upload of CAFM prime MS Word file

The aim of this primer is to provide sufficient operational information for a user to start working with advanced conductive atomic force microscopy (CAFM) procedures, such as tomography and constant bias time-domain measurements. It is most relevant to Bruker ICON/Veeco Dimension instruments, although many of the practices will be generally applicable to AFM/CAFM measurements. It is worth noting that the documentation/help files for Nanoscope and Nanoscope Analysis are very detailed and extremely useful. I would thoroughly recommend making use of them. I have not referenced material within the text, but I have included a selection of useful references at the end that are relevant to much of the technical discussion.

The details contained in this document combine the training I received when I started my EngD at UCL in 2013 with my experience using AFM and CAFM. Much of the information is based on approaches that I found worked during my experimental research, but they were not necessarily based on well-defined methodologies (although they were generally based on some modicum of science and/or scientific practice). As such, if you find alternative, more effective approaches, then I would recommend using them!

This primer describes measurement processes, from module and probe setup through software implementation and data analysis. It generally concerns contact-mode measurements (typical for CAFM) although some tapping-mode procedures will be mentioned. Notably, this is not an exhaustive training document for CAFM. Rather, it is written for users who already have some experience with AFM at least, in that they do not need to learn the theory and have a reasonable grasp of a typical scanning probe workflow. It should be used as a reference document rather than a training manual. Ideally, the user should take the information contained within and develop their own practical understanding and expertise, for which there is no substitute or alternative click-and-go approach. However, there should be sufficient detail here for a user to follow the workflow they need for their measurement.

Please email me with any questions or concerns, as I’ve certainly forgotten some details, described things poorly/incorrectly, or left some parts otherwise incomplete.

Enjoy!


Wednesday 2020-02-26 22:52:45 by Mark B

Direct upload of PDF.

The aim of this primer is to provide sufficient operational information for a user to start working with advanced conductive atomic force microscopy (CAFM) procedures, such as tomography and constant bias time-domain measurements. It is most relevant to Bruker ICON/Veeco Dimension instruments, although many of the practices will be generally applicable to AFM/CAFM measurements. It is worth noting that the documentation/help files for Nanoscope and Nanoscope Analysis are very detailed and extremely useful. I would thoroughly recommend making use of them. I have not referenced material within the text, but I have included a selection of useful references at the end that are relevant to much of the technical discussion.

The details contained in this document combine the training I received when I started my EngD at UCL in 2013 with my experience using AFM and CAFM. Much of the information is based on approaches that I found worked during my experimental research, but they were not necessarily based on well-defined methodologies (although they were generally based on at least some modicum of science and/or scientific practice). As such, if you find alternative, more effective approaches, then I would recommend using them!

This primer describes measurement processes, from module and probe setup through software implementation and data analysis. It generally concerns contact-mode measurements (typical for CAFM) although some tapping-mode procedures will be mentioned. Notably, this is not an exhaustive training document for CAFM. Rather, it is written for users who already have some experience with AFM at least, in that they do not need to learn the theory and have a reasonable grasp of a typical scanning probe workflow. It should be used as a reference document rather than a training manual. Ideally, the user should take the information contained within and develop their own practical understanding and expertise, for which there is no substitute or alternative click-and-go approach. However, there should be sufficient detail here for a user to develop the workflow they need for their particular measurement.

Please email me with any questions or concerns, as I’ve certainly forgotten some details, described things poorly/incorrectly, or left some parts otherwise incomplete.

Enjoy!


< 2020-02-26 >