< 2021-01-08 >

2,595,468 events, 1,324,338 push events, 2,091,069 commit messages, 165,067,031 characters

Friday 2021-01-08 00:13:25 by ms-mirror-bot

[MIRROR] NOTELEPORT checks that were atomized out of the medieval shuttle (#122)

  • NOTELEPORT errywhere (#55973)

These prevent some cheats, or really low-effort ways to get to where you really shouldn't be.

Mappers seriously fucking hate jaunting and phasing mechs, as they let you bypass their custom-crafted ruins and the like. But it'll also stop more general "you shouldn't be here" stuff.

  • NOTELEPORT checks that were atomized out of the medieval shuttle

Co-authored-by: tralezab [email protected]


Friday 2021-01-08 00:54:54 by Jonas Arnfred

Add type-checking

This change adds type-checking to the kind compiler. The principle of the type-checker (and of the scanner module it replaces) is to compute the possible outputs of a function given a subset of possible inputs. We do this by interpreting the function syntax tree and tracking:

a) The possible values of any expression in the function body
b) Whether the possible values of an expression can be satisfied by a pattern

Part a) gives us a domain function, which, given input domains, can tell us the output domain of a function or type. Part b) gives us a type-checker, in that we can check at compile time whether there are patterns that can't be satisfied by the expressions assigned to them.

Current issues

This logic was already largely implemented in the scanner module, but adding type-checking proved difficult because of several issues:

  • Logical errors in generated types: There might have been several issues (and I suspect there were), but one that comes to mind is that pattern matching as implemented by pattern_gen didn't work correctly for types. In code execution, the pattern would be evaluated top to bottom, and the first pattern which is a subset of the input domain would be returned. For a type, however, the result is a sum of all the patterns that are a subset of the input type.
  • Code rot: The scanner module passed unit tests, but I had never wired it into the compiler in a way that exercised the code paths, and the code was no longer up to date.
  • Error ambiguity: The scanner was ambiguous about what it considered a type error. There are several different error interpretations, all useful at different times, but the interpretation I had implemented in the scanner was a mixture that suited none of the actual use cases.
  • Scanner didn't type-check type functions: While defs were scanned and checked for errors, types (which are fundamentally just as complex) weren't, because of architectural limitations.

Specification

Kind type-checks expressions by computing the domain of all possible values the expression can evaluate to at run-time. Any patterns in the expression serve to check whether the domain is valid. In the strictest sense, "valid" means that the possible domain of values is a subset of the pattern domain.

Types in kind are expressions that are replaced by their domain at compile time.

Types are either type constants or type constructors. A type constant is a value like any other, and any value is equivalent to the domain of said value. Type constructors are partial functions that define domains. That means that when you call a type constructor it will return a domain, unless it's undefined for the arguments, in which case the type checker will return an error.

Defs, like types, are also partial functions. At compile time, we find any invocation of a def and compute the resulting domain. If the def is not defined for its arguments, an error is thrown by the typechecker.

Domain traversal

The typechecker computes the output of a def or type constructor by traversing the AST of the function and tracking the domains of all variables. A variable is assigned the any domain if nothing else is known about it, but if this variable is matched in a pattern, the domain is constrained to the values of the pattern. When the variable is later referenced in an expression, the domain of the variable known so far is used to compute the domain of the expression. For function calls, this behavior is repeated recursively.
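
The traversal is easiest to picture with a toy version. Here is a minimal sketch in Python (not the actual Kind typechecker; the node shapes, the ANY marker, and the set-based domains are assumptions made for illustration, and function calls, which would recurse the same way, are omitted):

```python
ANY = object()   # the `any` domain: nothing is known about the value yet

def intersect(a, b):
    """Intersection of two domains, where a domain is either ANY or a set of values."""
    if a is ANY:
        return b
    if b is ANY:
        return a
    return a & b

def eval_domain(node, env):
    """Walk a (hypothetical) expression node and return the domain it can evaluate to."""
    kind = node[0]
    if kind == "lit":        # a literal has a single-value domain
        return {node[1]}
    if kind == "var":        # a variable's domain is whatever we know about it so far
        return env.get(node[1], ANY)
    if kind == "match":      # matching a pattern constrains the variable's domain
        _, var, pattern_values, body = node
        constrained = intersect(env.get(var, ANY), set(pattern_values))
        return eval_domain(body, {**env, var: constrained})
    raise ValueError(f"unknown node kind: {kind!r}")

# After `match x with 1 | 2 -> x`, the domain of the expression is {1, 2}, not `any`.
print(eval_domain(("match", "x", [1, 2], ("var", "x")), {}))   # {1, 2}
```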

Error Modes

The typechecker defines three error modes:

  • Strict: The domain of a matched expression should be a subset of the matching pattern
  • Normal: The domain of a matched expression should intersect the matching pattern
  • Lenient: When the domain of a matched expression doesn't intersect the matching pattern, the none domain is returned instead of an error.

The strict error mode guarantees that the partial function of a def or type constructor will never see an undefined value. It does so by throwing an error if there are any values in a variable domain for which a pattern isn't defined. This mode is useful for answering the question: "Is my function defined for this value, and what is the output domain?"

The normal error mode (for lack of a better name) is more permissive. It allows us to compute the output domain as long as the partial function of the def or type constructor is defined for some values in the input domain. This more permissive mode of type scanning is suitable for answering the question "For any permissible value in this domain, what is the output domain of this function?" while still reporting an error if no values were permissible.

Finally, the lenient error mode returns the domain none instead of returning an error. This is useful in cases where we want to understand the domain of a partial function but don't wish to return any errors.
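
A hedged sketch of the three modes in Python, treating domains as plain sets and the none domain as the empty set (this only illustrates the semantics described above, not the actual implementation):

```python
def check_pattern(expr_domain: set, pattern_domain: set, mode: str) -> set:
    """Return the domain that flows past a pattern, or raise if the mode forbids it."""
    overlap = expr_domain & pattern_domain
    if mode == "strict":
        # every possible value must be covered by the pattern
        if not expr_domain <= pattern_domain:
            raise TypeError(f"values {expr_domain - pattern_domain} are not covered")
        return expr_domain
    if mode == "normal":
        # at least one possible value must be covered by the pattern
        if not overlap:
            raise TypeError("no value in the expression domain satisfies the pattern")
        return overlap
    if mode == "lenient":
        # never error; an empty overlap is simply the `none` domain
        return overlap
    raise ValueError(f"unknown mode: {mode}")

print(check_pattern({1, 2, 3}, {2, 3, 4}, "normal"))    # {2, 3}
print(check_pattern({1, 2, 3}, {5}, "lenient"))         # set(), i.e. the `none` domain
```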

Applying type checking

A big limitation of this particular approach to type checking is that there is no language for specifying the general type of a def. It's very straightforward to define a partial function where only certain combinations of arguments are valid. In fact, parameter combinations that the author of the code didn't anticipate or design their function for are often the cause of undefined behavior or runtime errors, and it's often non-trivial to capture these constraints in a type system that considers the parameter types as independent. The implication, however, is that it's often non-trivial to fully specify the valid inputs to a function.

It feels like this should be possible with a smarter algorithm than the one I've fashioned, but for now we don't have an easy way to determine which combinations of input values are valid.

As a consequence, we lack the means of running a strict type-check on a function, since we don't know the domain for which it is defined. Instead we find the output domain when the input given is any and run the type checker in normal mode, much to my chagrin.

Erlang interop

In the case of erlang functions, the type checker will return the output domain of any without errors, unless the following criteria are met:

  • The domain inputs are all value domains (i.e. normal values)
  • The erlang function in question has been whitelisted as pure

In this case the output domain of the erlang function is computed by running it at compile time on the given input.
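
A rough Python analogue of that rule (the whitelist contents, the single-value-set encoding of value domains, and the ANY marker are assumptions made for illustration):

```python
import operator

ANY = object()                                        # the unknown (`any`) domain
PURE_WHITELIST = {("operator", "add"): operator.add}  # assumed whitelist of pure functions

def foreign_call_domain(module: str, name: str, arg_domains: list):
    """Domain of a foreign call: `any`, unless every argument is a single known
    value and the function is whitelisted as pure, in which case we just run it."""
    fn = PURE_WHITELIST.get((module, name))
    args_are_values = all(isinstance(d, set) and len(d) == 1 for d in arg_domains)
    if fn is None or not args_are_values:
        return ANY
    args = [next(iter(d)) for d in arg_domains]
    return {fn(*args)}                                # e.g. the domain of 1 + 1 is {2}

print(foreign_call_domain("operator", "add", [{1}, {1}]))         # {2}
print(foreign_call_domain("operator", "add", [ANY, {1}]) is ANY)  # True: falls back to `any`
```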

Implementation

We run the type checker at compile time on the Kind language AST. In theory it should be straightforward to implement a pass after which all types are evaluated and contain no unapplied type constructors. In practice, however, I haven't implemented this pass, and as a consequence we need to be able to evaluate the domain of type functions (which in turn might call normal functions for which we'll need to evaluate the domain as well).

Because any type constructor can call any other type constructor and because the domain of a type constructor can only be decided given the type parameters, we need an environment of domain functions when we evaluate a domain, including of course the domain function for the domain we're currently evaluating in case there's a recursive call.

Before this refactor we implemented this environment at compile time by creating an environment of closures covering the code and parameters needed to compute domain functions. It was gnarly as fuck, hard to understand, and quite neat in a code-masochistic sort of way.

Because of the requirement to be able to run domain functions at run-time, we need to compile the domain functions to erlang core and then to beam modules. This provides a much simpler solution to the recursive environment problem, since we can just call the compiled functions at runtime.

Implementing the typechecker responsible for computing domains in erlang core would have been a slow and error-prone process, though. I'm possibly speaking from an ignorant perspective, but errors like forgetting to wrap an atom in cerl:c_atom are time consuming to debug, because the erlang compiler gives me no useful error messages; I'm usually left to comb through the erlang core AST and see if I can spot any problems instead. So to keep erlang core code to a minimum, I've instead opted for the approach of storing the Kind AST in erlang core form and executing a call to the typechecker module, passing the hardcoded AST as a parameter. I know it's a hack, but it saved me a lot of work, so I'm kind of pleased with the solution for what it is.
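
A loose Python analogue of that hack, just to illustrate the shape of what gets generated (the generated source below and the typecheck.eval_domain call are assumptions, not the real module layout):

```python
import json

def generate_domain_module(name: str, kind_ast: dict) -> str:
    """Emit source for a module that stores the AST verbatim and forwards to the
    shared typechecker, instead of re-implementing the typechecker logic in the
    generated code itself."""
    ast_literal = json.dumps(kind_ast)
    return (
        f"# generated domain module for {name}\n"
        f"AST = {ast_literal}\n"
        f"def domain(*arg_domains):\n"
        f"    import typecheck                # assumed shared typechecker module\n"
        f"    return typecheck.eval_domain(AST, arg_domains)\n"
    )

print(generate_domain_module("list", {"type": "constructor", "name": "list"}))
```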

Module Creation

In the domain_gen module we create a set of modules covering domain functions for the types and defs defined in the source file:

  • Root module: The root module defines three domain functions for each type and def defined in the source file, whether they are exported or not:
    1. A domain function with the same arity as the origin function, returning the domain given the domains of the arguments
    2. The same as above, but with an additional strictness argument (either strict, normal or lenient) prepended. For more discussion about strictness, see this commit message under "Error Modes"
    3. The same as above, but with a stack arg prepended. The "stack" used when evaluating a domain function serves to improve error messages as well as to catch recursion. This form of the domain function is called from the typecheck module. When a domain function makes a call to another domain function, this call is made to the root module, because all other functions are available in the root module.
  • Type modules: For each type constructor which defines new type constants in the function body, we create a separate type module containing the parent type and all sub-types defined in the function body. These modules make it possible to import types with the same code as ordinary imports.
  • Def modules: For each module defined in the source file, we define an equivalent domain module (using the name of the module with _domain appended). The domain module contains the same domain functions as defined by the root module, but only includes the functions exported in the kind module.

Hurdles / Points of note

Handling Recursion

To compute the domain of a function, we need to evaluate all branches of said function alongside all calls to other functions. This means any uncaught recursion will result in the typechecker looping forever at compile time.

To handle recursion, we pass a stack along as we recurse through the AST. The stack is similar to a runtime call stack in that each time we compute the domain of a function called in the AST, the name of the function is added to the stack. This lets us check whether a function has already been seen before we compute its domain.

If a function has been seen before, we return a recursive domain of the shape {recur, F}, where F is a function of zero arguments which returns the domain if evaluated.

Recursive domains are a special case when handling the union of two domains. If the domain is a sum of two or more elements where some (but not all) are recursive domains, the union of said domains excludes the recursive domains. We use the union of domains with pattern cases, and this allows us to compute the domain of a function by looking only at the non-recursive cases. It's a bit of an ad-hoc approach, and I worry it'll come back to bite me, but for now it makes the code quite straightforward.
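
A minimal Python sketch of the stack check and the recur-dropping union (the tuple-and-thunk encoding mirrors the {recur, F} shape described above; everything else is an assumption made for illustration):

```python
def is_recur(d):
    return isinstance(d, tuple) and d[:1] == ("recur",)

def union(domains):
    """Union of the case domains; recursive markers are dropped as long as at
    least one case is non-recursive, e.g. a list type's domain comes from Nil."""
    concrete = [d for d in domains if not is_recur(d)]
    return set().union(*concrete) if concrete else domains[0]

def domain_of(name, cases, stack=()):
    """Evaluate every case of `name`, threading a call stack to catch recursion."""
    if name in stack:
        # seen before: return a zero-argument thunk instead of looping forever
        return ("recur", lambda: domain_of(name, cases, stack))
    stack = stack + (name,)
    return union([case(stack) for case in cases[name]])

# A toy recursive "type": one case recurses, the other contributes {"nil"}.
cases = {"list": [lambda st: domain_of("list", cases, st), lambda st: {"nil"}]}
print(domain_of("list", cases))   # {'nil'}
```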

Whitelisting

The domain output of an erlang function is generally any, but if the domain arguments to the function are all values and the erlang function doesn't have any side effects, we compute the output domain by calling the function. This allows, for example, the output domain of 1 + 1 to be 2, which is useful in a bunch of cases.

Erlang doesn't have a clear notion of functions without side effects, so instead I co-opt for this purpose a module/function whitelist I originally made for running code online. The original whitelist was made to sandbox arbitrary code execution by making sure (or at least trying to make sure) that no function could be called that could compromise the host where the code was running. Side-effect-free functions fit the bill, but I shan't say that there aren't a few functions out there in the erlang libraries that are safe to run but not side-effect free. It's not something I've spent long thinking about, but I suspect that I'll have to at some point.

Pattern Matching Codegen

TypeEnv is generated as part of domain_gen. It's a map of type tags to forms. The tricky part is that using forms for the typesEnv makes it a bit more difficult to generate pattern matching, because the form is a call to the root domain module and calls aren't allowed in patterns. Instead, I need to make the call to the root domain module while compiling and translate the resulting domain to a pattern.
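
A hedged illustration of that workaround in Python (the pattern syntax produced here is made up; the point is only that the domain is computed at compile time and then rendered as pattern alternatives rather than as a call):

```python
def domain_to_pattern(tag: str, domain) -> str:
    """Translate an already-evaluated domain into pattern source text, since the
    original form (a call into the root domain module) can't appear in a pattern."""
    if isinstance(domain, set):
        # a finite value domain becomes a list of literal alternatives
        return " | ".join(f"{{{tag}, {value!r}}}" for value in sorted(domain))
    return f"{{{tag}, _}}"            # an open domain falls back to a wildcard

print(domain_to_pattern("point", {1, 2}))   # {point, 1} | {point, 2}
print(domain_to_pattern("point", "any"))    # {point, _}
```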


Friday 2021-01-08 02:38:48 by ejsvifq_mabmip

add lc_scatter_printable and lc_scatter_type_printable concepts. remove passing size in print_reserve_define; it is a prediction and has nothing to do with the logic of the function. add experimental initial support for chrono with l10n

C++20 chrono considered harmful. Howard Hinnant is an OOP loser and has no idea how computer architectures, operating systems, and compilers work.

WTF, you want both iso_encoding() and c_encoding() with weekday? January? Why does the identifier in the standard library start with an upper-case letter?

Unfortunately, WG21 is full of garbage like him. What the fuck is with all these interfaces passing const references? A reference to unsigned short??? FUCK FUCK FUCK FUCK FUCK.

std::chrono::year is stored as a short while it returns int? It only works with the year range [-32768, 32767]. Why does WG21 want another Year 2000 dumbshit? You might say 32768 AD does not make sense since humans will probably have disappeared by then, but what about 32769 BC??

What about std::chrono::parse? WTF, you parse date-time with a locale? C++ locale does not even work correctly.

Of course, Herb Sutter is another loser, who advertises int, which has been proven harmful for an enormous amount of time. Even today, OpenSSL BIO contains legacy garbage like this. C++ Core Guidelines Considered Harmful. It is Google C++ Coding Style 2.0.

FUCK FUCK FUCK FUCK FUCK FUCK. WTF is this shit that gets added into C++20? Modern C++ is just objectively harmful.


Friday 2021-01-08 04:17:15 by Keno Fischer

Turn off client-side port reuse on Darwin (#38901)

For scalability, in the Distributed code and when supported by the operating system, we bind all client sockets to the same port (the server ports still differ). In general, each tcp connection is identified by the 4-tuple (source_ip, source_port, dest_ip, dest_port) (also known as a 5-tuple if the protocol is included). Re-using the client port saves on the number of available ports, but since the 5-tuple is still different due to the varying server ports, it doesn't end up causing any trouble.

However, on Darwin, we run into a bit of a pickle. When a connection exits, the server side of the socket enters TIME_WAIT state, while the client side of the socket immediately enters CLOSED state, allowing the port to be reused (if we weren't setting SO_REUSEPORT anyway). Now, ordinarily the server port number is not allowed to be re-used if there is any remaining PCB (protocol control block, basically the kernel data structure associated with a 5-tuple) that references said port (including those in TIME_WAIT state). However, this is very annoying for servers in general (since it would mean that servers can't be restarted for some number of seconds until after the last connection terminated), so it is common to pass SO_REUSEADDR on server sockets. In fact, libuv does that for us automatically. Unfortunately, as a result, there is nothing that prevents us from re-using the same 5-tuple before the TIME_WAIT state has timed out (the client just got immediately recycled, and we explicitly bypassed the TIME_WAIT restriction on the server). Linux appears to handle this fine, but Darwin does not. On Darwin, when we try to connect with a re-used 5-tuple, there is some chance (depending on the kernel hash table for PCBs) that we will get the stale PCB rather than the fresh one, causing the connection to drop. This is the cause of #38812 and our mac CI reliability issues in the Distributed test.

It is worth pointing out that this is not directly related to SO_REUSEPORT. Since the client port is immediately available for recycle, it is perfectly legal for us to re-use that port immediately, even if SO_REUSEPORT is not set (which would exhibit the same problem). However, because doing that is not particularly useful, we just randomize the port if SO_REUSEPORT is not set. This doesn't fix the situation either, but should hopefully reduce the incidence rate below the rate at which it is problematic for CI.

We could turn off SO_REUSEADDR on the server entirely, but that is undesirable for reasons mentioned above. Since Linux does not have the same issue, I have some hope that Apple might consider this a bug and adjust the behavior. I have an open support request with them to that extent. However, in the meantime this will hopefully help our CI reliability.
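
For illustration, the client-side pattern under discussion looks roughly like this in Python (this is not Julia's actual Distributed code; the addresses and port numbers are made up, and SO_REUSEPORT only exists on platforms that support it):

```python
import socket

def connect_from_fixed_client_port(dest, client_port=None):
    """Open a client connection, optionally binding the local (client) side to a
    fixed port with SO_REUSEPORT, which is the port-saving trick described above."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    if client_port is not None and hasattr(socket, "SO_REUSEPORT"):
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
        s.bind(("0.0.0.0", client_port))   # every client socket shares this port
    # With client_port=None the OS picks a random ephemeral port instead, which is
    # the randomization fallback the commit describes for Darwin.
    s.connect(dest)
    return s

# conn = connect_from_fixed_client_port(("127.0.0.1", 9000), client_port=41234)
```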


Friday 2021-01-08 05:27:34 by Harald Sitter

kbd preview: don't fall over unfortunate arg combos

as it turns out it's fairly easy to fail the original xkb assert when using variants with certain models that don't actually sport support for the variants

e.g. --model applealu_iso --layout gb --variant mac_intl

to deal with this we no longer assume that geometry compilation will actually work and instead switch the qml ui into an error state when problems appear. furthermore we'll run the arg combination through setxkbmap|xkbcomp to get a sense of why the server might have failed to compile the geometry and use that as detailed error output
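
A rough sketch of that diagnostic pipeline in Python (the exact flags are an assumption; setxkbmap -print piped into xkbcomp is the general pattern):

```python
import subprocess

def explain_geometry_failure(model: str, layout: str, variant: str) -> str:
    """Run the argument combination through setxkbmap | xkbcomp and return whatever
    the compiler complains about, for use as detailed error output."""
    keymap = subprocess.run(
        ["setxkbmap", "-model", model, "-layout", layout, "-variant", variant, "-print"],
        capture_output=True, text=True)
    if keymap.returncode != 0:
        return keymap.stderr
    compiled = subprocess.run(
        ["xkbcomp", "-", "-o", "/dev/null"],   # compile the keymap read from stdin
        input=keymap.stdout, capture_output=True, text=True)
    return compiled.stderr

# print(explain_geometry_failure("applealu_iso", "gb", "mac_intl"))
```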

I need to point out that the preview failing doesn't necessarily mean the layout application as a whole will fail, because there are different code paths the kcm and the kded will take to apply layouts, so depending on which path is taken the configuration may be partially applied or not at all (e.g. with the scenario from above the end result may be that the model doesn't get applied but the layout and variant will)

since these changes required some rejiggering of the pointers and their lifetimes this has also seen some related cleanup. notably the geometry is now in charge of the lifetime of the root xkb object and the previous qsharedpointer cleanup (which was really scoped pointer cleanup, but scoped pointer has a kinda sad api for custom deleters) is now instead a unique_ptr with a custom deleter.


Friday 2021-01-08 07:10:57 by Erik Skultety

hostdev: mdev: Lookup mdevs by sysfs path rather than mdev struct

The lookup didn't do anything apart from comparing the sysfs paths anyway since that's what makes each mdev unique. The most ridiculous usage of the old logic was in virHostdevReAttachMediatedDevices where in order to drop an mdev hostdev from the list of active devices we first had to create a new mdev and use it in the lookup call. Why couldn't we have used the hostdev directly? Because the hostdev and mdev structures are incompatible.

The way mdevs are currently removed is via a write to a specific sysfs attribute. If you do it while the machine which has the mdev assigned is running, the write call may block (with a new enough kernel; with older kernels it would return a write error!) until the device is no longer in use, which is when the QEMU process exits.

The interesting part comes afterwards, when we're cleaning up and call virHostdevReAttachMediatedDevices. The domain doesn't exist anymore, so the list of active hostdevs needs to be updated and the respective hostdevs removed from the list. But remember, we had to create an mdev object in memory in order to find it in the list first, and that will fail because the write to sysfs had already removed the mdev instance from the host system. And so the next time you try to start the same domain you'll get:

"Requested operation is not valid: mediated device is in use by driver QEMU, domain "

Fixes: https://gitlab.com/libvirt/libvirt/-/issues/119

Signed-off-by: Erik Skultety [email protected] Reviewed-by: Ján Tomko [email protected]


Friday 2021-01-08 07:16:41 by Jeff King

ahead-behind: do not die when we see no INTERESTING pending object

We currently die if we are fed an ahead/behind with zero objects (foo..foo in the most basic case, but in practice something like foo@{upstream}..foo, when foo has just been merged). The problem is that we let handle_revision_arg parse it, and then pick the pieces out of the pending object list. So "^foo" looks no different to us there than "foo".

This patch hacks around it by picking up the UNINTERESTING object in that case. However, this isn't great because:

  1. Now we won't notice some types of bogus input.

  2. We end up reporting the name of the UNINTERESTING object.

We probably should pick apart the ".." ourselves, or even just change it to ":" or whitespace.


Friday 2021-01-08 09:06:39 by Ondřej Budai

distro/rhel84: use a random uuid for XFS partition

Imagine this situation: You have a RHEL system booted from an image produced by osbuild-composer. On this system, you want to use osbuild-composer to create another image of RHEL.

However, there's currently something funny with partitions:

All RHEL images built by osbuild-composer contain a root xfs partition. The interesting bit is that they all share the same xfs partition UUID. This might sound like a good thing for reproducibility but it has a quirk.

The issue appears when osbuild runs the qemu assembler: it needs to mount all partitions of the future image to copy the OS tree into it.

Imagine that osbuild-composer is running on a system booted from an image produced by osbuild-composer. This means that its root xfs partition has this uuid:

efe8afea-c0a8-45dc-8e6e-499279f6fa5d

When osbuild-composer builds an image on this system, it runs osbuild that runs the qemu assembler at some point. As I said previously, it will mount all partitions of the future image. That means that it will also try to mount the root xfs partition with this uuid:

efe8afea-c0a8-45dc-8e6e-499279f6fa5d

Do you remember this one? Yeah, it's the same one as before. However, the xfs kernel driver doesn't like that. It contains a global table of all xfs partitions that forbids mounting 2 xfs partitions with the same uuid.

I mean... uuids are meant to be unique, right?

This commit changes the way we build RHEL 8.4 images: Each one now has a unique uuid. It's now literally a unique universally unique identifier. haha
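
The gist of the fix, sketched in Python (this is not osbuild-composer's actual code; mkfs.xfs accepts -m uuid= to set the filesystem UUID at creation time, and the device path below is made up):

```python
import subprocess
import uuid

def make_root_xfs(device: str) -> str:
    """Create the root xfs filesystem with a freshly generated UUID instead of a
    hard-coded one, so two images built this way can be mounted side by side."""
    fs_uuid = str(uuid.uuid4())   # unique per image build
    subprocess.run(["mkfs.xfs", "-m", f"uuid={fs_uuid}", device], check=True)
    return fs_uuid

# make_root_xfs("/dev/loop0p2")   # e.g. the future image's root partition
```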


Friday 2021-01-08 09:28:13 by Noé

The re module of Python is bullshit. I hate it.

Spent 10 hours trying to debug what I thought came from my own stupidity, while it was actually a problem with a feature of the re module. This is why everyone uses regex instead.

https://www.reddit.com/r/adventofcode/comments/kg1mro/2020_day_19_solutions/ggd6xt5/ see magic if line at the end


Friday 2021-01-08 10:45:55 by bors[bot]

Merge #3415

3415: Better Sliders(V) / Lists r=def- a=Banana090

Horizontal sliders stays the same

THESE FIXES ARE TEMPORARY. This PR contains some shitty hotfixes which I'm planning to rework later. The reason they're shitty is that the source code for this was already shitty, simple as that. The whole list idea should be reworked in order to make the code clearer and nicer. Will do later. Also I'm planning to make much better mouse scrolling for lists, not the crappy "teleporting" thing we have right now, so anyway I will come back to the lists later.

Basically, I'm reworking all the UI tabs one by one. Right now I am making the new "Tee" tab, which will combine the "Player" and "Tee" settings tabs in one, as well as include an improved skin selector with a much clearer view of skins and so on... And I need this list update for it.

The "Tee" tab update turns out to be a big one, so I decided to try splitting it into several PRs. So lists and sliders go first.

FIXES/UPDATES:

  1. Slider is not flickering anymore when scrolling mouse wheel at the min/max slider position
  2. Cool visual change of course, how can I go without that?
  3. Slider is not shown for List if all list elements are visible in rect
  4. Disabled List footers (useless and ugly)
  5. Disabled List headers (useless and ugly)

Screenshot_1

Checklist

  • Tested the change ingame
  • Provided screenshots if it is a visual change
  • Tested in combination with possibly related configuration options
  • Written a unit test if it works standalone, system.c especially
  • Considered possible null pointers and out of bounds array indexing
  • Changed no physics that affect existing maps
  • Tested the change with ASan+UBSan or valgrind's memcheck (optional)

Co-authored-by: Дядя Женя [email protected] Co-authored-by: def [email protected]


Friday 2021-01-08 11:27:33 by Marko Grdinić

"10:45am. Let me chill a little. Today I woke up too early. Let me read the first chapter of that Retsu isekai manga.

11:25am. Let me do some work. Let me fix that bug from yesterday.

Then I will have breakfast and do the chores.

    let generalize r scope (forall_vars : Var list) (body : T) =
        let h = HashSet(HashIdentity.Reference)
        List.iter (h.Add >> ignore) forall_vars

Let me get rid of this HashSet. Just what the hell was I thinking here?

    let generalize r scope (forall_vars : Var list) (body : T) =
        let generalized_metavars = ResizeArray()
        let rec replace_metavars x =
            let f = replace_metavars
            match x with
            | TyMetavar(_,{contents=Some x} & link) -> f x
            | TyMetavar(x, link) when scope = x.scope ->
                let v = {scope=x.scope; constraints=x.constraints; kind=kind_force x.kind; name=autogen_name !autogened_forallvar_count}
                incr autogened_forallvar_count
                link := Some (TyVar v)
                generalized_metavars.Add(v)
            | TyVar _ | TyMetavar _ | TyNominal _ | TyB | TyPrim _ | TySymbol _ -> ()

Let me give this a try.

11:30am. Yeah, it worked. Things like this happen when it is one's first time doing something.

Now let me make a recursive wrap.

inl wrap (b,a) (pu p) =
    pu {size = a >> p.size
        pickle = fun x state => p.pickle (a x) state
        unpickle = fun state => b (p.unpickle state)
        }

inl wrap' (b,a) p =
    pu {size = fun x => inl (pu p) = p() in p.size(a x)
        pickle = fun x state => inl (pu p) = p() in p.pickle (a x) state
        unpickle = fun state => inl (pu p) = p() in b (p.unpickle state)
        }

This should do it.

inl list (pu p) =
    inl (pu i32) = I32
    pu {size = list.fold (fun s x => s + I32Size + p.size x) I32Size
        pickle = fun x state =>
            let rec loop = function
                | Cons: x, xs =>
                    i32.pickle 0 state
                    p.pickle x state
                    loop xs
                | Nil =>
                    i32.pickle 1 state
            loop x
        unpickle = fun state =>
            let rec loop () =
                match i32.unpickle state with
                | 0 => Cons: p.unpickle state, loop()
                | 1 => Nil
                | _ => failwith "Invalid tag."
            loop ()
        }

Now let me back this up, and I will reimplement it in terms of alt.

inl rec list p =
    alt (function Cons: _ => 0 | Nil => 1) (
        wrap' (cons_,fun (Cons: a,b) => a,b) (fun () => pair p (list p)) ::
        wrap (nil,fun Nil => ()) Unit ::
        Nil
        )

Quite a gain in concision.

11:45am. I am sold. wrap and alt really make a large difference in concision.

...Ah, crap. Now I remember why I had that convoluted scheme in generalize. That is in order to generalize mutually recursive statements.

Let me backtrack a little.

    let generalize r scope (forall_vars : Var list) (body : T) =
        let h = HashSet(HashIdentity.Reference)
        List.iter (h.Add >> ignore) forall_vars
        let generalized_metavars = ResizeArray()
        let rec replace_metavars x =
            let f = replace_metavars
            match x with
            | TyMetavar(_,{contents=Some x} & link) -> f x
            | TyMetavar(x, link) when scope = x.scope ->
                let v = TyVar {scope=x.scope; constraints=x.constraints; kind=kind_force x.kind; name=autogen_name !autogened_forallvar_count}
                incr autogened_forallvar_count
                link := Some v
                replace_metavars v
            | TyVar v -> if scope = v.scope && h.Add(v) then generalized_metavars.Add(v)

How about I just do this instead.

Yeah, this is the correct fix. As can be seen, even if one knows what the problem is, the obvious thing to do is not always the right one.

11:55am. Things are good.

Now that I have this, the next step would be to actually try it out.

Let me make a test.

inl main () =
    inl scheme = pair I32 (pair I32 (list (record_qwe (pair I32 (pair String Char)))))
    ()

Ah, I went too fast. Before I can do anything I need the serialize and deserialize functions.

inl serialize (pu p) x =
    inl ar = array.create (p.size x)
    ()

This is not giving me a missing metavar error even though it should.

inl serialize (pu p) x =
    open array
    inl ar = create (p.size x)
    inl i = mut 0

    ()

Now it is telling me that array is an unbound variable. Something is seriously wrong here.

12:20pm.

nominal serialized a = array i8

inl serialize forall t. (pu p) x =
    inl size = p.size x
    inl ar = array.create size
    inl i = mut 0
    p.pickle x (i,ar)
    assert (*i = size) "The size of the array does not correspond to the amount being pickled. One of the combinators is faulty."
    serialized ar : serialized t

inl deserialize forall t. (pu p : pu t) (serialized x : serialized t) : t =
    inl i = mut 0
    inl r = p.unpickle (i,x)
    assert (*i = array.length x) "The size of the array does not correspond to the amount being unpickled. One of the combinators is faulty or the data is malformed."
    r

inl main () =
    inl scheme = pair I32 (pair I32 (list (record_qwe (pair I32 (pair String Char)))))
    inl x = 1,2,({q=1;w="a";e='z'} :: {q=2;w="s";e='x'} :: Nil)
    assert (x = deserialize scheme (serialize scheme x)) "Serialization and deserialization should result in the same result."

I wrote this out, but I do not want to run it right now. I'll leave that kind of debugging for later. First, let me restart the server to see if this array issue persists.

Wow, the metavar errors do not happen when it is a module application. That is ridiculous. How did I miss this for so long. Also I do need to get to the bottom of why open array fails outright.

Yeah, it persists. Let me make a test just for this.

open array
inl main () = ()

Yeah, in a separate test I do get that the array is unbound. But accessing the array function does work. It sees it properly in the hover.

12:25pm.

// Does the open array work? Do unapplied metavars errors happen on module application?
open array
inl main () =
    inl x = array.fold
    ()

Ok, good. I have my next task in this. After that I will try running the serialization test. That should tell me everything I need to know.

After I deal with this, I'll finish the documentation section.

Time for breakfast."


Friday 2021-01-08 11:39:09 by Neo

Failure soon will lead to success Even if it's excessive Escape will be impressive Pain is a weakness So I'm doing the best I can Victory is sweetness That's why I'm a Stick Man with the plan

Don't be afraid We're the ones who'll help you find the way So much to say But don't be here to stay I woke up in despair I look ahead, beware To find three little ghosts in front of me, I'm scared I look around to see if anyone else can see But no one hears my pleas Or cares to hear me scream I doubt that people care Alone and so aware And now by myself, I'm left with all this pain to bear With shivers going down my spine and ghosts long gone dead But then this is what they said Don't be afraid We're the ones who'll help you find the way So much to say But don't be here to stay Help me Someone please come and help me Need somebody to tell me Please, please What the hell is going on Crazy I might be going crazy Need someone here to tell me Please, please How do I get along? You're nothing but a waste You should know your place Just like those demons in my head, it fills up my brain All I am doing here is begging as I'm trying my best But it becomes a mess The fire in my head It's overwhelming me But then I cannot flee I'm stuck here all by myself, alone again it seems With all this pain inside my head and ghosts long gone dead But then this is what they said Don't be afraid We're the ones who'll help you find the way So much to say But don't be here to stay Help me Someone please come and help me Need somebody to tell me Please, please What the hell is going on Crazy I might be going crazy Need someone here to tell me Please, please How do I get along? Don't be afraid We're the ones who'll help you find the way So much to say But don't be here to stay Save me Someone please come and save me Need somebody to tell me Please, please Just how to move on Crazy I might be going crazy Need someone here to tell me Please, please How do I get along? Help me, help me


Friday 2021-01-08 12:18:54 by IsabellaShi0124

Add files via upload

Jan 8th 18:30 - 20:20. Just a side note: I just realized that the question I worked on for the previous session was #5, not #6. For this session, I completed three problems. #6 went fairly smoothly, except I spent 10 minutes trying to figure out why my function kept giving me a result of 1. It turned out that I had indented the return statement too much. Stupid mistake, but an important lesson to remember. Understanding the biological terms in #7 took some time. The logic and math of #7 were kind of complicated, so I drew some graphs to help myself understand. Once the math worked out, it became easy to complete. For #8, I went back to lab 2 for some inspiration. Typing the codon translation took a very long time.


Friday 2021-01-08 12:29:09 by Mario Jorge Pereira

Update 2018-01-12-is-intelligence-enough.md


layout: post
title: "External Featured Image"
author: sal
categories: [ Jekyll, tutorial, web development ]
image: "https://images.unsplash.com/photo-1541544537156-7627a7a4aa1c?ixlib=rb-0.3.5&ixid=eyJhcHBfaWQiOjEyMDd9&s=a20c472bc23308e390c8ffae3dd90c60&auto=format&fit=crop&w=750&q=80"

Education must also train one for quick, resolute and effective thinking. To think incisively and to think for one's self is very difficult.

We are prone to let our mental life become invaded by legions of half truths, prejudices, and propaganda. At this point, I often wonder whether or not education is fulfilling its purpose. A great majority of the so-called educated people do not think logically and scientifically.

Even the press, the classroom, the platform, and the pulpit in many instances do not give us objective and unbiased truths. To save man from the morass of propaganda, in my opinion, is one of the chief aims of education. Education must enable one to sift and weigh evidence, to discern the true from the false, the real from the unreal, and the facts from the fiction.

The function of education, therefore, is to teach one to think intensively and to think critically. But education which stops with efficiency may prove the greatest menace to society. The most dangerous criminal may be the man gifted with reason, but with no morals.

The late Eugene Talmadge, in my opinion, possessed one of the better minds of Georgia, or even America. Moreover, he wore the Phi Beta Kappa key. By all measuring rods, Mr. Talmadge could think critically and intensively; yet he contends that I am an inferior being. Are those the types of men we call educated?

We must remember that intelligence is not enough. Intelligence plus character--that is the goal of true education. The complete education gives one not only power of concentration, but worthy objectives upon which to concentrate. The broad education will, therefore, transmit to one not only the accumulated knowledge of the race but also the accumulated experience of social living.


Friday 2021-01-08 12:36:05 by Mario Jorge Pereira

Update and rename 2018-01-12-never-stopped-worrying-never-loved-bomb.md to 2020-01-07-trabalhe-4-horas-por-semana-.md

I’ve been through fire and water, I tell you! From my earliest pebblehood the wildest things you could imagine have been happening to this world of ours, and I have been right in the midst of them.

So begins Hallam Hawksworth’s The Strange Adventures of a Pebble. Written in the 1920s, the book was part of a series which also included The Adventures of a Grain of Dust and A Year in the Wonderland of Trees, all of which were supposed to introduce children to the world of Natural Sciences. In each of them, Hawksworth personifies the natural object he is exploring, and using a mixture of folk tales, scientific facts and colloquial, friendly explanations guides the reader through the history of the natural world. It’s a real thrill of a ride, dramatizing the life cycle of supposedly dull things. The Adventures of a Grain of Dust begins even more loudly than Pebble:

I don’t want you to think that I’m boasting, but I do believe I’m one of the greatest travellers that ever was; and if anybody, living or dead, has ever gone through with more than I have I’d like to hear about it.

Hallam Hawksworth was the pen-name of teacher Francis Blake Atkinson. He was married to the author Eleanor Stackhouse Atkinson, author of the children's classic Greyfriars Bobby, which was based on the (supposedly) true story of a Scottish dog who spent fourteen years guarding his master's grave. The couple were both committed to education and published a weekly magazine for Chicago high school students called The Little Chronicle, as well as working for encyclopaedia companies later in life.


Friday 2021-01-08 14:55:07 by Sejr_Yoga

Holy fucking shit, lots of files edited. So this started with me trying to create a movieDetails activity. Since then lots of stuff has been edited in the different interfaces and some of their implementations.

Basically it has changed nothing so far, but the foundation for further development has been laid.


Friday 2021-01-08 15:14:52 by rendenba

Bugfixes/Misc Improvements

Debug variables should be cheats
Added allowing debugging of weapons for vampires
Please, for the love of god, can we be done with prediction issues with the crowbar!


Friday 2021-01-08 16:28:07 by David B

ws error: connect ECONNREFUSED 127.0.0.1:8900 (#96)

The Rust version requires access to port 8900. You'll get the error: "ws error: connect ECONNREFUSED 127.0.0.1:8900" if you run the rust version and the port forwarding is not set up. For completeness and to make the helloworld experience as error-free and troubleshoot-free as possible, I think we should add the port forwarding command to the installation instructions.


Friday 2021-01-08 16:47:07 by bors[bot]

Merge #2299

2299: Add CSR support for 64 Bit RISC-V r=hudson-ayers a=bradjc

Pull Request Overview

Adaptation of #2041.

This pull request takes only the CSR updates from #2041 so that the CSR code can be compiled for both 32 bit and 64 bit RISC-V. This is mainly just converting u32 -> usize, and then adding arch cfgs where the 32 bit and 64 bit architectures don't quite match.

I believe this is the non-controversial changes from #2041, and is a step towards supporting a 64 bit RISC-V platform. The other changes needed to fully support rv64 involve changing the low-level assembly since there is no "register size load/store" in RISC-V, and updating the PMP code. I think we need to come up with some strategy to support full rv64 without hopefully duplicating a lot of very delicate code.

Also, yes, it pains me to add so many cfg statements. However, they are all based on the architecture as specified by the RISC-V spec, and not part of the software architecture in Tock. So they should be reasonably well documented, and any confusion can be resolved by referring to the RISC-V spec.

Testing Strategy

I tried the arty-e21 board and it still works.

TODO or Help Wanted

n/a

Documentation Updated

  • Updated the relevant files in /docs, or no updates are required.

I did a grep in the docs and I didn't see anywhere we talked about CSR sizes.

Formatting

  • Ran make prepush.

Sidenote: enabling automatic rustfmt in your text editor is absolutely amazing and you should absolutely do it.

Co-authored-by: Brad Campbell [email protected] Co-authored-by: Sean Anderson [email protected]


Friday 2021-01-08 17:50:01 by Al Viro

binfmt_elf: partially sanitize PRSTATUS_SIZE and SET_PR_FPVALID

On 64bit architectures that support 32bit processes there are two possible layouts for NT_PRSTATUS note in ELF coredumps. For one thing, several fields are 64bit for native processes and 32bit for compat ones (pr_sigpend, etc.). For another, the register dump is obviously different - the size and number of registers are not going to be the same for 32bit and 64bit variants of processor.

Usually that's handled by having two structures - elf_prstatus for the native layout and compat_elf_prstatus for the 32bit one. 32bit processes are handled by fs/compat_binfmt_elf.c, which defines a macro called 'elf_prstatus' that expands to compat_elf_prstatus. Then it includes fs/binfmt_elf.c, which causes all references to struct elf_prstatus to be textually replaced with struct compat_elf_prstatus. Ugly and somewhat brittle, but it works.

However, amd64 is worse - there are three possible layouts. One for native 64bit processes, another for i386 (32bit) processes and yet another for x32 (32bit address space with full 64bit registers).

Both i386 and x32 processes are handled by fs/compat_binfmt_elf.c, with usual compat_binfmt_elf.c trickery. However, the layouts for i386 and x32 are not identical - they have the common beginning, but the register dump part (pr_reg) is bigger on x32. Worse, pr_reg is not the last field - it's followed by int pr_fpvalid, so that field ends up at different offsets for i386 and x32 layouts.

Fortunately, there's not much code that cares about any of that - it's all encapsulated in fill_thread_core_info(). Since x32 variant is bigger, we define compat_elf_prstatus to match that layout. That way i386 processes have enough space to fit their layout into.

Moreover, since these layouts are identical prior to pr_reg, we don't need to distinguish x32 and i386 cases when we are setting the fields prior to pr_reg.

Filling pr_reg itself is done by calling ->get() method of appropriate regset, and that method knows what layout (and size) to use.

We do need to distinguish x32 and i386 cases only for two things: setting ->pr_fpvalid (offset differs for x32 and i386) and choosing the right size for our note.

The way it's done is Not Nice, for the lack of more accurate printable description. There are two macros (PRSTATUS_SIZE and SET_PR_FPVALID), that default essentially to sizeof(struct elf_prstatus) and (S)->pr_fpvalid = 1. On x86 asm/compat.h provides its own variants.

Unfortunately, quite a few things go wrong there:

  • PRSTATUS_SIZE doesn't use the normal test for a process being an x32 one; it compares the size reported by regset with the size of pr_reg.
  • It hardcodes the sizes of the x32 and i386 variants (296 and 144 resp.), so if some change in includes leads to asm/compat.h being pulled in by fs/binfmt_elf.c we are in trouble - it will end up using the size of the x32 variant for 64bit processes.
  • It's in the wrong place; asm/compat.h couldn't define the structure for the i386 layout, since it lacks quite a few types needed for it. The hardcoded sizes are largely due to that.

The proper fix would be to have an explicitly defined i386 variant of structure and have PRSTATUS_SIZE/SET_PR_FPVALID check for TIF_X32 to choose the variant that should be used. Unfortunately, that requires some manipulations of headers; we'll do that later in the series, but for now let's go with the minimal variant - rename PRSTATUS_SIZE in asm/compat.h to COMPAT_PRSTATUS_SIZE, have fs/compat_binfmt_elf.c define PRSTATUS_SIZE to COMPAT_PRSTATUS_SIZE and use the normal TIF_X32 check in that macro. The size of i386 variant is kept hardcoded for now. Similar story for SET_PR_FPVALID.

Signed-off-by: Al Viro [email protected]


Friday 2021-01-08 18:30:26 by christie

Day 20 - Awful code 🙈 but so proud 😊

Okay, this code is awful and no one should look at it ever, BUT you should have seen how happy I was when this finally worked. I was working on this from day 20 until Christmas Eve, and when it finally worked it basically made my year. I had 33 (I kid you not, THIRTY THREE) different versions of this sitting around. In the end it turned out I made a silly mistake in my understanding of what rotation would do, and it's kind of amazing I got part 1 at all - once I figured that out (SO MANY HOURS LATER) I was finally able to get it to work! Now that I know that, I kinda hope I'll go back one day and remove like 90% of this code, but who are we kidding, it'll be 2021 advent of code before we know it :D


Friday 2021-01-08 20:27:05 by Filip Hracek

Avoid hateTowards bug

When there was Confusion in any actor’s future, it meant that the 1000 score totally skewed all results.

The issue was that hateTowards was calculated from the perspective of the future actor. For example, if there was a future in which a goblin gets confused, that consequence suddenly means the enemy team score jumps up by thousands (because suddenly all living friends are hated-towards with a 1000 intensity).

This is fixed by using the initial actor (confused or not) to score the future. This makes more sense even by intuition. We don’t value future possibilities by how our future selves will like them but by how we like them today. Our future self might be insane and love complete solitude, but that shouldn’t make us motivated to go insane and move into the wilderness.


Friday 2021-01-08 21:30:32 by Vince

Add files via upload

This is the first installment of the love-seo page that will live on the server for https://schlezes.com/love-seo.

There are two reasons for this repository:

  1. I need to update the (very old) wordpress-seo page that was published in 2014, which is no longer useful in the WordPress framework.

     In addition, with my new design of https://schlezes.com, which is an HTML- and JavaScript-only design, I need to update my viewpoint of how SEO works 7 years later.

  2. With this repository, I gain additional experience with revision control for the JavaScript experience.


Friday 2021-01-08 21:52:11 by KingOKarma

Developer (#189)

  • added a whole bunch of shit

  • fixed some stuff and added some stuuf lel

  • yeah

  • removed some stupid ideas and fixed the home page

  • updated alot if stuff like slash commands. still needs work tho

  • split up the css files

  • some small changes that will be put in once rewrite is done

  • forgot to change embed names oops

  • quick embed welcome msg fix

Co-authored-by: milas [email protected] Co-authored-by: BuildTools [email protected] Co-authored-by: Milas [email protected]


Friday 2021-01-08 22:10:14 by Braden Obrzut

Overhaul CMakeLists to conform with modern CMake

  • Prefer target properties instead of setting variables whenever possible. A zmusic-obj target now exists to represent the commonality between zmusic and zmusiclite.
  • Factored out as much as possible from global settings to per target settings which will make it easier to support using ZMusic as a submodule. Moved helper functions into a ZUtility.cmake module.
  • We now generate and install ZMusicConfig.cmake so find_package(ZMusic) will work either automatically or given ZMusic_DIR is set.
  • CPack is enabled although some refinement is still needed.
  • Requires CMake >= 3.13, which is newer than I would normally like, but given how no one likes to refactor these things, it may be better to deal with the short-term pain of going a little aggressive on the requirement in order to avoid having to make things ugly, especially given that these scripts have a tendency to be copy/pasted into sister projects. CMake itself has very few dependencies, so users of old Linux distros should be able to easily compile a supported version of CMake.
  • On Windows CMake >= 3.15 is required for redistributable results.
  • Cleaned out bits that were copied from GZDoom but not relevant to ZMusic.

Friday 2021-01-08 22:19:42 by bossotron13

Fuck you, Fuck code i did this shit in the most scuffed way as possible


Friday 2021-01-08 23:10:19 by Patric Stout

Remove: warning in cheat window

Although it was meant as a funny joke towards the player, our social standards have changed since 2004, and such "jokes" are no longer acceptable to the community as a whole.

It also serves absolutely no purpose, other than trying to be funny. Let's keep the jokes to funny people, so we can concentrate on a good game :)


Friday 2021-01-08 23:16:39 by Noômen Ben Hassin

Pull Request

Contributing on this pull request

Contributing to this repository

Getting started

Before you begin:

Use the 'make a contribution' button

Navigating a new codebase can be challenging, so we're making that a little easier. As you're using docs.github.com, you may come across an article that you want to make an update to. You can click on the make a contribution button right on that article, which will take you to the file in this repo where you'll make your changes.

Before you make your changes, check to see if an issue exists already for the change you want to make.

Don't see your issue? Open one

If you spot something new, open an issue using a template. We'll use the issue to have a conversation about the problem you want to fix.

Ready to make a change? Fork the repo

Fork using GitHub Desktop:

Fork using the command line:

  • Fork the repo so that you can make your changes without affecting the original project until you're ready to merge them.

Fork with GitHub Codespaces:

Make your update:

Make your changes to the file(s) you'd like to update. Here are some tips and tricks for using the docs codebase.

Open a pull request

When you're done making changes and you'd like to propose them for review, use the pull request template to open your PR (pull request).

Submit your PR & get it reviewed

  • Once you submit your PR, others from the Docs community will review it with you. The first thing you're going to want to do is a self review.
  • After that, we may have questions; check back on your PR to keep up with the conversation.
  • Did you have an issue, like a merge conflict? Check out our git tutorial on how to resolve merge conflicts and other issues.

Your PR is merged!

Congratulations! The whole GitHub community thanks you. ✨

Once your PR is merged, you will be proudly listed as a contributor in the contributor chart.

Keep contributing as you use GitHub Docs

Now that you're a part of the GitHub Docs community, you can keep participating in many ways.

Learn more about contributing:

Types of contributions 📝

You can contribute to the GitHub Docs content and site in several ways. This repo is a place to discuss and collaborate on docs.github.com! Our small but mighty 💪 docs team maintains this repo; to preserve our bandwidth, off-topic conversations will be closed.

📣 Discussions

Discussions are where we have conversations.

If you'd like help troubleshooting a docs PR you're working on, have a great new idea, or want to share something amazing you've learned in our docs, join us in discussions.

🪲 Issues

Issues are used to track tasks that contributors can help with. If an issue has a triage label, we haven't reviewed it yet and you shouldn't begin work on it.

If you've found something in the content or the website that should be updated, search open issues to see if someone else has reported the same thing. If it's something new, open an issue using a template. We'll use the issue to have a conversation about the problem you want to fix.

🛠️ Pull requests

A pull request is a way to suggest changes in our repository.

When we merge those changes, they should be deployed to the live site within 24 hours. 🌍 To learn more about opening a pull request in this repo, see Opening a pull request below.

❓ Support

We are a small team working hard to keep up with the documentation demands of a continuously changing product. Unfortunately, we just can't help with support questions in this repository. If you are experiencing a problem with GitHub, unrelated to our documentation, please contact GitHub Support directly. Any issues, discussions, or pull requests opened here requesting support will be given information about how to contact GitHub Support, then closed and locked.

If you're having trouble with your GitHub account, contact Support.

🌏 Translations

This website is internationalized and available in multiple languages. The source content in this repository is written in English. We integrate with an external localization platform called Crowdin and work with professional translators to localize the English content.

We do not currently accept contributions for translated content, but we hope to in the future.

⚖️ Site Policy

GitHub's site policies are published on docs.github.com, too!

If you find a typo in the site policy section, you can open a pull request to fix it. For anything else, see the CONTRIBUTING guide in the site-policy repo.

Starting with an issue

You can browse existing issues to find something that needs help!

Labels

Labels can help you find an issue you'd like to help with.

  • The help wanted label is for problems or updates that anyone in the community can start working on.
  • The good first issue label is for problems or updates we think are ideal for beginners.
  • The content label is for problems or updates in the content on docs.github.com. These will usually require some knowledge of Markdown.
  • The engineering label is for problems or updates in the docs.github.com website. These will usually require some knowledge of JavaScript/Node.js or YAML to fix.

Opening a pull request

You can use the GitHub user interface ✏️ for some small changes, like fixing a typo or updating a readme. You can also fork the repo and then clone it locally, to view changes and run your tests on your machine.

Working in the github/docs repository

Here's some information that might be helpful while working on a Docs PR:

  • Development - This short guide describes how to get this app running on your local machine.

  • Content markup reference - All of our content is written in GitHub-flavored Markdown, with some additional enhancements.

  • Content style guide for GitHub Docs - This guide covers GitHub-specific information about how we style our content and images. It also links to the resources we use for general style guidelines.

  • Reusables - We use reusables to help us keep content up to date. Instead of writing the same long string of information in several articles, we create a reusable, then call it from the individual articles.

  • Variables - We use variables the same way we use reusables. Variables are for short strings of reusable text.

  • Liquid - We use liquid helpers to create different versions of our content.

  • Scripts - The scripts directory is the home for all of the scripts you can run locally.

  • Tests - We use tests to ensure content will render correctly on the site. Tests run automatically in your PR, and sometimes it's also helpful to run them locally.

Reviewing

We (usually the docs team, but sometimes GitHub product managers, engineers, or supportocats too!) review every single PR. The purpose of reviews is to create the best content we can for people who use GitHub.

💛 Reviews are always respectful, acknowledging that everyone did the best possible job with the knowledge they had at the time.
💛 Reviews discuss content, not the person who created it.
💛 Reviews are constructive and start conversation around feedback.

Self review

You should always review your own PR first.

For content changes, make sure that you:

  • Confirm that the changes address every part of the content design plan from your issue (if there are differences, explain them).
  • Review the content for technical accuracy.
  • Review the entire pull request using the localization checklist.
  • Copy-edit the changes for grammar, spelling, and adherence to the style guide.
  • Check new or updated Liquid statements to confirm that versioning is correct.
  • Check that all of your changes render correctly in staging. Remember that lists and tables can be tricky.
  • If there are any failing checks in your PR, troubleshoot them until they're all passing.

Pull request template

When you open a pull request, you must fill out the "Ready for review" template before we can review your PR. This template helps reviewers understand your changes and the purpose of your pull request.

Suggested changes

We may ask for changes to be made before a PR can be merged, either using suggested changes or pull request comments. You can apply suggested changes directly through the UI. You can make any other changes in your fork, then commit them to your branch.

As you update your PR and apply changes, mark each conversation as resolved.

Windows

This site can be developed on Windows, however a few potential gotchas need to be kept in mind:

  1. Regular Expressions: Windows uses \r\n for line endings, while Unix based systems use \n. Therefore when working on Regular Expressions, use \r?\n instead of \n in order to support both environments. The Node.js os.EOL property can be used to get an OS-specific end-of-line marker.
  2. Paths: Windows systems use \ for the path separator, which would be returned by path.join and others. You could use path.posix, path.posix.join etc and the slash module, if you need forward slashes - like for constructing URLs - or ensure your code works with either.
  3. Bash: Not every Windows developer has a terminal that fully supports Bash, so it's generally preferred to write scripts in JavaScript instead of Bash.



Friday 2021-01-08 23:17:46 by Marissa

Update unpackerr.conf.example

Added a #comment to lidarr and readarr bc oh my god I hate myself and spent a lil bit too much time figuring out why the program doesn't work


< 2021-01-08 >