2021-03-31.md
1219 lines (944 loc) · 56.3 KB
2021-03-31

4,758,513 events, 1,342,895 push events, 2,164,488 commit messages, 178,917,540 characters

Wednesday 2021-03-31 00:00:44 by pithlessly

implement closures (:confetti:)

Up to this point I tried as hard as I could to break up the process into a bunch of incremental changes. Unfortunately I didn't manage to sustain that for the whole thing, so here we are.

A summary of the changes, in roughly the order that I see them as I look over the diff now:

  • Introduce a type alias for the function pointer type of closures.

  • Change closure functions to take a pointer to the vector containing the captured variables, rather than to the array directly. This doesn't cost any indirection and allows the array to be stored as an Obj value.

  • Distinguish between "local" and "nonlocal" variables (i.e. those which are defined by the current procedure vs. by outer procedures) in the AST. Nonlocal variables are annotated, not just with how nonlocal they are, but also the ID of the procedure that they come from.

  • Don't clear variables at the end of a (let) block. This is incorrect if they still get used by any closures. I could have modified it to only clear stack-allocated variables, but I didn't bother (although I should have, now that I think about it).

  • Distinguish between (lambda)s which capture variables and those which don't. The latter continue using MAKE_PROC, the former use a new function make_closure into which we pass the environment.

  • Move the responsibility for creating separate sequences of random access indices for boxed and unboxed variables (formerly done by write_local) into Expr, exposing them as fields in local_data.

  • Implement captures from further back than the immediately enclosing function. Layers which are "in between" are annotated with fwd_env, which instructs the code generator to store a pointer to the enclosing environment as the first element of their environment. This means we may introduce a number of heap allocations proportional to the depth of the nesting, but that shouldn't be a problem unless you really like writing (lambda () (lambda () (lambda () ...))) for some reason.

  • Make Expr.proc_id abstract again, because I was using it in a lot of places where I would otherwise just have a lot of int variables, and was worried that I'd mess them up. We still provide an accessor function which is just implemented as the identity.

  • You know how I said nonlocal variables are annotated with the ID of the procedure they came from? A few things to note:

    • Top-level local variables (not global variables introduced with (define), but those introduced by (let)) have an ID of -1. Yeah, sorry about that.

    • There's kind of a chicken-and-egg problem, where to add a function to gctx you need to pass over its body first, but you need to have added it already in order to annotate the AST with its ID in case the body contains lambdas that capture its variables. We solve this by first adding a dummy function with an empty body to gctx, and then replacing the body once we've created it out of the parse tree.

      This has the disadvantage that procedures no longer always have a higher ID than the procedures they construct, which leads to forward references in the generated C code. To fix this, we forward declare all procedures to be safe.

  • Remove the (box _) form. A variable being captured by lambdas automatically marks it as needing to be boxed.

Unfortunately, these changes come at the cost of making the code a bit messier. There are already a lot of data structures in Expr that store subtly different subsets of essentially the same data, but now we're leaning on them even more heavily.
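The environment-chaining scheme described above (boxed captures, plus fwd_env layers storing a pointer to the enclosing environment as the first element) can be sketched in Python. The names make_closure, call, and the list-based env layout here are illustrative stand-ins for the generated C, not the actual implementation:

```python
# A closure is a pair (code, env); env is a list of captured values.
# A captured variable is boxed as a one-element list, so writes are
# visible to every closure that shares it. "In between" layers marked
# fwd_env store the enclosing environment at slot 0.

def make_closure(code, env):
    return (code, env)

def call(clo, *args):
    code, env = clo
    return code(env, *args)

# Source-level equivalent of: (lambda (x) (lambda () (lambda () x)))
def outermost(env, x):
    box = [x]                    # x is captured, so it gets boxed
    inner_env = [box]            # environment for the middle lambda
    def middle(env):
        fwd = [env]              # fwd_env: slot 0 points at enclosing env
        def innermost(env):
            return env[0][0][0]  # follow the chain: fwd -> outer env -> box
        return make_closure(innermost, fwd)
    return make_closure(middle, inner_env)

clo = make_closure(outermost, [])
inner = call(call(clo, 42))
print(call(inner))  # 42
```

Each nesting layer costs one extra heap allocation (the fwd list), which matches the note above about allocation proportional to nesting depth.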


Wednesday 2021-03-31 00:09:18 by AustinR

Fixed my damn contact form the way I want it that's really all I remember doing at this point thanks a lot input fucking boxes the end


Wednesday 2021-03-31 01:54:32 by Mike

Create youtube-2024-experience.txt

As a part of recent changes over at YouTube - I have tried to simulate an experience of using YouTube in the year 2024! Any unnecessary elements such as interaction, basic statistics and other have been completely hidden! Content creators will absolutely love this!


Wednesday 2021-03-31 02:24:48 by Jean-Paul R. Soucy

New data: 2021-03-30. See data notes.

Revise historical data: cases (AB, BC, NB, NS, ON, SK); mortality (ON).

Note regarding deaths added in QC today: “7 new deaths, for a total of 10,658 deaths: 1 death in the last 24 hours, 6 deaths between March 23 and March 28.” We report deaths such that our cumulative regional totals match today’s values. This sometimes results in extra deaths with today’s date when older deaths are removed.
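The reconciliation rule above can be checked with a quick sketch (illustrative Python; yesterday's cumulative total is inferred from today's note):

```python
# We report deaths so that cumulative regional totals match today's value,
# so deaths dated in the past that are added today surface in today's count.
cumulative_today = 10658          # "7 new deaths, for a total of 10,658"
cumulative_yesterday = 10651      # inferred: 10,658 - 7
deaths_in_last_24h = 1
backdated = 6                     # deaths between March 23 and March 28

reported_today = cumulative_today - cumulative_yesterday
assert reported_today == deaths_in_last_24h + backdated  # 1 + 6 == 7
```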

Recent changes:

2021-01-27: Due to the limit on file sizes in GitHub, we implemented some changes to the datasets today, mostly impacting individual-level data (cases and mortality). Changes below:

  1. Individual-level data (cases.csv and mortality.csv) have been moved to a new directory in the root directory entitled “individual_level”. These files have been split by calendar year and named as follows: cases_2020.csv, cases_2021.csv, mortality_2020.csv, mortality_2021.csv. The directories “other/cases_extra” and “other/mortality_extra” have been moved into the “individual_level” directory.
  2. Redundant datasets have been removed from the root directory. These files include: recovered_cumulative.csv, testing_cumulative.csv, vaccine_administration_cumulative.csv, vaccine_distribution_cumulative.csv, vaccine_completion_cumulative.csv. All of these datasets are currently available as time series in the directory “timeseries_prov”.
  3. The file codebook.csv has been moved to the directory “other”.

We appreciate your patience and hope these changes cause minimal disruption. We do not anticipate making any other breaking changes to the datasets in the near future. If you have any further questions, please open an issue on GitHub or reach out to us by email at ccodwg [at] gmail [dot] com. Thank you for using the COVID-19 Canada Open Data Working Group datasets.

  • 2021-01-24: The columns "additional_info" and "additional_source" in cases.csv and mortality.csv have been abbreviated similarly to "case_source" and "death_source". See note in README.md from 2021-11-27 and 2021-01-08.

Vaccine datasets:

  • 2021-01-19: Fully vaccinated data have been added (vaccine_completion_cumulative.csv, timeseries_prov/vaccine_completion_timeseries_prov.csv, timeseries_canada/vaccine_completion_timeseries_canada.csv). Note that this value is not currently reported by all provinces (some provinces have all 0s).
  • 2021-01-11: Our Ontario vaccine dataset has changed. Previously, we used two datasets: the MoH Daily Situation Report (https://www.oha.com/news/updates-on-the-novel-coronavirus), which is released weekdays in the evenings, and the “COVID-19 Vaccine Data in Ontario” dataset (https://data.ontario.ca/dataset/covid-19-vaccine-data-in-ontario), which is released every day in the mornings. Because the Daily Situation Report is released later in the day, it has more up-to-date numbers. However, since it is not available on weekends, this leads to an artificial “dip” in numbers on Saturday and “jump” on Monday due to the transition between data sources. We will now exclusively use the daily “COVID-19 Vaccine Data in Ontario” dataset. Although our numbers will be slightly less timely, the daily values will be consistent. We have replaced our historical dataset with “COVID-19 Vaccine Data in Ontario” as far back as they are available.
  • 2020-12-17: Vaccination data have been added as time series in timeseries_prov and timeseries_hr.
  • 2020-12-15: We have added two vaccine datasets to the repository, vaccine_administration_cumulative.csv and vaccine_distribution_cumulative.csv. These data should be considered preliminary and are subject to change and revision. The format of these new datasets may also change at any time as the data situation evolves.

https://www.quebec.ca/en/health/health-issues/a-z/2019-coronavirus/situation-coronavirus-in-quebec/#c47900

Note about SK data: As of 2020-12-14, we are providing a daily version of the official SK dataset that is compatible with the rest of our dataset in the folder official_datasets/sk. See below for information about our regular updates.

SK transitioned to reporting according to a new, expanded set of health regions on 2020-09-14. Unfortunately, the new health regions do not correspond exactly to the old health regions. Additionally, case time series using the new boundaries are not provided for dates earlier than August 4, making it impossible to provide a full time series using the new boundaries.

For now, we are adding new cases according to the list of new cases given in the “highlights” section of the SK government website (https://dashboard.saskatchewan.ca/health-wellness/covid-19/cases). These new cases are roughly grouped according to the old boundaries. However, health region totals were redistributed when the new boundaries were instituted on 2020-09-14, so while our daily case numbers match the numbers given in this section, our cumulative totals do not. We have reached out to the SK government to determine how this issue can be resolved. We will rectify our SK health region time series as soon as it becomes possible to do so.


Wednesday 2021-03-31 03:44:50 by ezio84

Fix 2tap2wake after Ambient Pulsing on some devices

Like taimen and walleye; sunfish (and probably newer Pixels), by contrast, doesn't need this.

To apply, override the config_has_weird_dt_sensor bool in the device tree

Change-Id: I8a82b34163d2cf37b81eeb0727aa95e80961fd57


TL;DR for some reason, on taimen and walleye, after ambient pulsing gets triggered by adb with the official "com.android.systemui.doze.pulse" intent or by our custom "wake to ambient" features, the double tap sensor dies if you follow these steps:

  • screen is OFF
  • trigger ambient pulsing with a double tap to wake (if custom wake to ambient feature is enabled), or the official intent by adb, or with music ticker or any other event
  • after ambient display shows up, don't touch anything and wait till the screen goes OFF again
  • double tap to wake, again
  • the double tap sensor doesn't work at all and device doesn't wake up

Now, funny thing, after the steps above, if you cover then uncover the proximity/brightness sensor with the hand, then double tap to wake again, the wake gesture works as expected.

When covering/uncovering the proximity/brightness sensor, this happens:

    11-10 22:02:00.916   967   998 I ASH     : @ 1993.460: ftm4_disable_sensor: disabling sensor [double-tap]
    11-10 22:02:02.013   967   998 I ASH     : @ 1994.556: ftm4_enable_sensor: enabling sensor [double-tap]

When you switch screen ON with power button, the doze screen states do the same: the sensor gets disabled then enabled again if device goes to DOZE idle state.

Instead, after Ambient pulsing, when the pulsing finishes, the sensor is still enabled, so the disable/enable event doesn't happen this time. And that's why, for some reason, it doesn't respond anymore.

So, in a nutshell: i've no idea why this sh#t happens lol, but with a super lazy hacky tricky dirty bloody nooby line change, we can force the sensor disable/enable event when the device goes to DOZE state.

Change-Id: I8ce463a6e435e540e3ca93336c5dba7a95771b56 Signed-off-by: Jackeagle [email protected]


Wednesday 2021-03-31 05:45:16 by Justin Gottula

#60: Fix Makefile issues arising from IA32/AMD64 arch assumption

This Makefile is frankly an abomination. It's still originally based on the crusty ugly Makefile provided with the demo programs in the ASK SDK.

The need to manually adjust the PLATFORM variable in the Makefile is idiotic and really should not be there, except maybe in cases of cross-compilation.


Wednesday 2021-03-31 05:45:25 by Crypt Keeper

Migrates e2e tests from ginkgo,gomega to testing,testify (#130)

To call ginkgo,gomega indirect would be putting things lightly. A combination of an indirect testing library and long-running tests are a recipe for frustration. I've begun a process to remove ginkgo,gomega with a couple tests that I had been looking at.

See #127

Some benefits

  • Before, we had tests using both normal testing,testify and ginkgo,gomega. Now we only have one way to write tests: testing,testify.
  • testing,testify are more popular and easier to learn than ginkgo,gomega. testing is the standard library described in all golang books. testify is a simple assertion library 10x more popular than gomega in terms of star count. Neither require learning BDD concepts.
  • Before, our running of e2e tests was significantly different than our other non-gingko tests in the same project. It literally required compilation of a platform-specific binary named 'e2e', adding a lot of complexity to CI and local setup.
  • Before, individual tests could not be executed via normal means, such as GoLand. Instead, you had to run the suite file. Now, you can execute and, more importantly, debug individual tests instead of going through a suite wrapper.
  • Before, writing tests not only required understanding of the subject, but also the ginkgo runtime. Otherwise, statements like var _ = Describe("getenvoy extension init", func() { seem pure magic.
  • Before, the nature of test execution required a combination of understanding both the Go platform and ginkgo.
  • Before, updating Ginkgo/gomega would skew the dependency tree, due to their dependencies on two protobuf libraries. Now, the test runtime does not interfere with main code.
  • Before, the same table test took more lines of code to achieve. Less code is less to maintain.
  • Some other notes about migration off are captured in this issue openservicemesh/osm#1704

Notable differences

Gomega had a different way of logging. For example instead of using a logging library, it would use statements like this: By("changing to the output directory") which translate into output STEP: changing to the output directory. This is mainly to satisfy BDD idioms and is meant to be interpreted amongst the many lines of normal log output. For cases where the points inside the test should be logged, we should use normal logging instead. However, I don't recommend making log statements unless they are meaningful and clarify progress.

Also, those not used to it will notice the use of normal go defer statements instead of the before/after test hooks used in tools like gomega. This can take getting used to, but it is also the same as normal go code, so it shouldn't be time lost. There are some cases where multiple tear-down steps are needed, and those steps are more visible now. That's not necessarily bad, because these are typically also the most complex tests with a lot going on. Seeing the various setup requirements exposes problem areas, which helps when debugging.

Signed-off-by: Adrian Cole [email protected]


Wednesday 2021-03-31 06:21:31 by Volsungr

Pandaria & Islands Corrected 2.0

Re-re-upload of intended changes. Sorry, Git is being a pain in the ass today for absolutely no reason that I've been able to identify. Everything looks good this time around.


Wednesday 2021-03-31 06:36:54 by Justin Gottula

#61: Rectify -Waddress-of-packed-member warnings from gcc

Whoever invented the SER header was an idiot and decided that all the 4- and 8-byte fields should be 2-byte-aligned. Do these people understand how computers work? Do they think memory buses are magic? Sigh...

This solution to the problem is nifty because it uses std::tie stuff. But ultimately it ended up dumber than intended, because it didn't end up being possible to do std::tie directly to the struct members for exactly the same reason that taking pointers to them generated warnings: they aren't aligned. So the troublesome part ended up just needing an intermediate temporary assignment anyway, and the niftiness factor was sadly substantially decreased.


Wednesday 2021-03-31 13:12:01 by Waffuru77

fuck you

i made the logo on the website BUT JUST FOR NOW I KNOW WERE GONNA USE CSS FOR THAT SHIT YUH


Wednesday 2021-03-31 14:21:51 by AngusRed

Update 2021-03-31-Joining-Hack-South.md

Added some more colours. Damn i love colours


Wednesday 2021-03-31 15:06:06 by HiTechCharles

Breakfast, lunch, and dinner columns are ready. Instead of using Add buttons to add items, you type in an item and press Enter.


Wednesday 2021-03-31 15:11:04 by DooberJaz

Huge quality of life improvements and adder/Moore machine!

This took SO long to get out there due to numerous PC troubles including my entire D drive packing up (thank god I had this github repo to redownload from)


Wednesday 2021-03-31 15:23:29 by HARDIntegral

put main, in its own folder, and I FUCKING LOVE MAKE


Wednesday 2021-03-31 15:37:12 by NothinButJohn

we are breaking up for awhile. Ill see you again my love. Until then... </3. I broke you a lot but you became stronger from it. We built each other up even more. No matter what they tell you, it isnt you - its me. A stronger dev couldve made you even better and given you the life in the datosphere I could only dream of for you:*). I remember when you were just a messaging app, then you loaded your first TSLA stock data. Then you were loading thoughts. It was beautiful to witness your growth. I nearly had file upload working too. In the end we both knew I'd have to put time into other projects:( you'll have a special place on disk with me and Asuna-Sakura. I hate this part the most... Goodbye for now


Wednesday 2021-03-31 17:01:50 by Marko Grdinić

"1:50pm. Done with breakfast. Let me play around with UIs. This is not like editor support where I had a concrete idea that I wanted to bring to life. Right now, I just need to get familiar with charts and tables.

Please see the examples/widgets/recycleview/basic_data.py file for a more complete example.

Where is this thing? Let me locate it.

https://github.com/kivy/kivy/blob/master/examples/widgets/recycleview/basic_data.py

Google was smarter than I thought it would be.

2pm. Focus me focus.

The above example is not too horrible. Just think of how great it will be when I am killing people with robots, and I will get motivated. I can figure out how to use tables. I just need to put in the effort to do it.

2:05pm.

<Test>:
    canvas:
        Color:
            rgba: 0.3, 0.3, 0.3, 1
        Rectangle:
            size: self.size
            pos: self.pos
    rv: rv

Hmmm, rv is not a property here, and yet the set still works.

self.rv.data[0]['name.text'] = value or 'default new value'

I did not know that Python's logical operators can be used like this.
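This works because Python's `and`/`or` short-circuit and return one of their operands rather than a bool, which is what makes `value or 'default new value'` act as a default:

```python
# `or` returns the first truthy operand, otherwise the last operand.
print('' or 'default new value')  # -> 'default new value' ('' is falsy)
print('hello' or 'default')       # -> 'hello'
print(0 or 42)                    # -> 42 (careful: 0 is falsy too)
```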

scroll_wheel_distance: dp(114)

What is this?

https://kivy.org/doc/stable/api-kivy.uix.scrollview.html

Distance to move when scrolling with a mouse wheel. It is advisable that you base this value on the dpi of your target device’s screen.

Ah, I see.

    def populate(self):
        self.rv.data = [
            {'name.text': ''.join(sample(ascii_lowercase, 6)),
             'value': str(randint(0, 2000))}
            for x in range(50000)]

The thing works. It is fast even with 50k widgets.

<Row@RecycleKVIDsDataViewBehavior+BoxLayout>:
    canvas.before:
        Color:
            rgba: 0.5, 0.5, 0.5, 1
        Rectangle:
            size: self.size
            pos: self.pos
    value: ''
    Label:
        id: index
        font_name: 'RobotoMono-Regular'
    Label:
        id: name
        font_name: 'RobotoMono-Regular'
    Label:
        text: root.value
        font_name: 'RobotoMono-Regular'

Let me try adding another field to this.

    def populate(self):
        self.rv.data = [
            {
            'index.text': str(x),
            'name.text': ''.join(sample(ascii_lowercase, 6)),
            'value': str(randint(0, 2000))
            }
            for x in range(50)]

Ah, I see. This is pretty interesting.

Just index does not work. It has to be index.text in order to set the text of the label.

This is definitely using the dynamism of the Python language to its fullest. This would definitely be a pain in the ass in a statically typed language.

    Label:
        canvas.before:
            Color:
                rgba: 1,0,0,0.3
            Rectangle:
                size: self.size
                pos: self.pos
        id: index
        font_name: 'RobotoMono-Regular'
        size_hint_x: None
        width: dp(30)
        text_size: self.size
        halign: 'right'

Ok, I get how to align the text so that the ids come first. All is good.

<Row@RecycleKVIDsDataViewBehavior+BoxLayout>:

Now what is RecycleKVIDsDataViewBehavior?

I think the example in the folder is really good. I certainly have everything I need if it is only displaying the replay buffer. But reacting to clicks is something I'd like to figure out as well.

https://kivy.org/doc/stable/api-kivy.uix.recycleview.views.html?highlight=recyclekvidsdataviewbehavior#kivy.uix.recycleview.views.RecycleKVIDsDataViewBehavior

Then setting the data list with rv.data = [{'name.text': 'Kivy user', 'value.text': '12'}] would automatically set the corresponding labels.

So, if the key doesn’t have a period, the named property of the root widget will be set to the corresponding value. If there is a period, the named property of the widget with the id listed before the period will be set to the corresponding value.

Yeah, this is the right way to do a DSL. I am struck with admiration at how they managed this.
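The dispatch rule quoted above can be sketched like this (a hypothetical apply_data helper with stand-in widgets, not Kivy's actual code):

```python
from types import SimpleNamespace

def apply_data(widget, data):
    """Apply a RecycleView data dict per the documented key rule:
    'id.prop' sets prop on the child with that id; a bare key sets
    a property on the root widget."""
    for key, value in data.items():
        if '.' in key:
            wid, prop = key.split('.', 1)
            setattr(widget.ids[wid], prop, value)   # e.g. 'name.text'
        else:
            setattr(widget, key, value)             # root widget property

# tiny demo with stand-in objects instead of real Kivy widgets
row = SimpleNamespace(value='', ids={'name': SimpleNamespace(text='')})
apply_data(row, {'name.text': 'Kivy user', 'value': '12'})
print(row.ids['name'].text, row.value)  # Kivy user 12
```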

4pm. I got caught up in chores. I thought I would not have to do them at all today, but I ended up having to help around the house.

Let me get back on track.

4:15pm.

from random import sample, randint
from string import ascii_lowercase

from kivy.app import App
from kivy.lang import Builder
from kivy.uix.boxlayout import BoxLayout

kv = """
<Row@RecycleKVIDsDataViewBehavior+BoxLayout>:
    canvas.before:
        Color:
            rgba: 0.5, 0.5, 0.5, 1
        Rectangle:
            size: self.size
            pos: self.pos
    value: ''
    Label:
        canvas.before:
            Color:
                rgba: 1,0,0,0.3
            Rectangle:
                size: self.size
                pos: self.pos
        id: index
        font_name: 'RobotoMono-Regular'
        size_hint_x: None
        width: dp(30)
        text_size: self.size
        halign: 'right'
    Label:
        id: name
        font_name: 'RobotoMono-Regular'
    Label:
        text: root.value
        font_name: 'RobotoMono-Regular'

<Test>:
    canvas:
        Color:
            rgba: 0.3, 0.3, 0.3, 1
        Rectangle:
            size: self.size
            pos: self.pos
    rv: rv
    orientation: 'vertical'
    GridLayout:
        cols: 3
        rows: 2
        size_hint_y: None
        height: dp(108)
        padding: dp(8)
        spacing: dp(16)
        Button:
            text: 'Populate list'
            on_press: root.populate()
        Button:
            text: 'Sort list'
            on_press: root.sort()
        Button:
            text: 'Clear list'
            on_press: root.clear()
        BoxLayout:
            spacing: dp(8)
            Button:
                text: 'Insert new item'
                on_press: root.insert(new_item_input.text)
            TextInput:
                id: new_item_input
                size_hint_x: 0.6
                hint_text: 'value'
                padding: dp(10), dp(10), 0, 0
        BoxLayout:
            spacing: dp(8)
            Button:
                text: 'Update first item'
                on_press: root.update(update_item_input.text)
            TextInput:
                id: update_item_input
                size_hint_x: 0.6
                hint_text: 'new value'
                padding: dp(10), dp(10), 0, 0
        Button:
            text: 'Remove first item'
            on_press: root.remove()

    RecycleView:
        id: rv
        scroll_type: ['bars', 'content']
        scroll_wheel_distance: dp(114)
        bar_width: dp(10)
        viewclass: 'Row'
        RecycleBoxLayout:
            default_size: None, dp(26)
            default_size_hint: 1, None
            size_hint_y: None
            height: self.minimum_height
            orientation: 'vertical'
            spacing: dp(2)
"""

Builder.load_string(kv)

class Test(BoxLayout):

    def populate(self):
        self.rv.data = [
            {
            'index.text': str(x),
            'name.text': ''.join(sample(ascii_lowercase, 6)),
            'value': str(randint(0, 2000))
            }
            for x in range(50)]

    def sort(self):
        self.rv.data = sorted(self.rv.data, key=lambda x: x['name.text'])

    def clear(self):
        self.rv.data = []

    def insert(self, value):
        self.rv.data.insert(0, {
            'index': str(len(self.rv.data)),
            'name.text': value or 'default value',
            'value': 'unknown'}
            )

    def update(self, value):
        if self.rv.data:
            self.rv.data[0]['name.text'] = value or 'default new value'
            self.rv.refresh_from_data()

    def remove(self):
        if self.rv.data:
            self.rv.data.pop(0)

class TestApp(App):
    def build(self):
        return Test()

if __name__ == '__main__':
    TestApp().run()

This is the example I've been studying and I get everything here.

Let me take a look at the docs again.

https://github.com/kivy/kivy/tree/master/examples/widgets/recycleview

There are also more examples in the folder here.

Hmmm...refresh_from_data is not even documented. Actually, it is, just in a different place.

https://kivy.org/doc/stable/api-kivy.uix.boxlayout.html?highlight=minimum_height#kivy.uix.boxlayout.BoxLayout.minimum_height

Automatically computed minimum height needed to contain all children.

Hmmm...

class SelectableLabel(RecycleDataViewBehavior, Label):
    ''' Add selection support to the Label '''
    index = None
    selected = BooleanProperty(False)
    selectable = BooleanProperty(True)

    def refresh_view_attrs(self, rv, index, data):
        ''' Catch and handle the view changes '''
        self.index = index
        return super(SelectableLabel, self).refresh_view_attrs(
            rv, index, data)

That index = None should be global. Then what does self.index = index do? It should be setting the local object value.
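More precisely, `index = None` is a class attribute (a shared default), and `self.index = index` creates an instance attribute that shadows it, so each view instance carries its own index. A reduced stand-in (without the Kivy base classes) shows the distinction:

```python
class SelectableLabel:
    index = None                # class attribute: shared default

    def refresh_view_attrs(self, index):
        self.index = index      # instance attribute shadows the class one

a, b = SelectableLabel(), SelectableLabel()
a.refresh_view_attrs(3)
print(a.index, b.index, SelectableLabel.index)  # 3 None None
```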

Hmmmm...

class SelectableLabel(RecycleDataViewBehavior, Label):
    ''' Add selection support to the Label '''
    index = None
    selected = BooleanProperty(False)

Why does changing this to true not cause all of them to become selected?

4:45pm. Why does it remove them all twice at the start?

    def on_touch_down(self, touch):
        ''' Add selection on touch down '''
        if super(SelectableLabel, self).on_touch_down(touch):
            return True
        print("In on_touch_down")
        if self.collide_point(*touch.pos) and self.selectable:
            return self.parent.select_with_touch(self.index, touch)

It has nothing to do with this. That print never gets triggered.

class SelectableRecycleBoxLayout(FocusBehavior, LayoutSelectionBehavior,
                                 RecycleBoxLayout):
    ''' Adds selection and focus behaviour to the view. '''

That second one is responsible for selection behavior. I have no idea what focus behavior is supposed to be.

https://kivy.org/doc/stable/api-kivy.uix.behaviors.compoundselection.html?highlight=multiselect#kivy.uix.behaviors.compoundselection.CompoundSelectionBehavior.touch_multiselect

It says that if this is false, I need to press Ctrl in order to get multiselect to work, but it does not work for me.

    def refresh_view_attrs(self, rv, index, data):
        ''' Catch and handle the view changes '''
        self.index = index
        return super(SelectableLabel, self).refresh_view_attrs(
            rv, index, data)

Instead of adding a specific index, I could have used this last time.

5:05pm. Focus me, close /a/.

class SelectableLabel(RecycleDataViewBehavior, Label):
    ''' Add selection support to the Label '''
    index = None
    selected = BooleanProperty(False)
    selectable = BooleanProperty(True)

    def refresh_view_attrs(self, rv, index, data):
        ''' Catch and handle the view changes '''
        self.index = index
        return super(SelectableLabel, self).refresh_view_attrs(
            rv, index, data)

    def on_touch_down(self, touch):
        ''' Add selection on touch down '''
        if super(SelectableLabel, self).on_touch_down(touch):
            return True
        if self.collide_point(*touch.pos) and self.selectable:
            return self.parent.select_with_touch(self.index, touch)

    def apply_selection(self, rv, index, is_selected):
        ''' Respond to the selection of items in the view. '''
        self.selected = is_selected
        if is_selected:
            print("selection changed to {0}".format(rv.data[index]))
        else:
            print("selection removed for {0}".format(rv.data[index]))

This seems complicated, and I could not have written it myself, but it is pretty simple. I can adapt this for any kind of touch situation. The only thing I am bothered about is why those removes are happening at the beginning.

    def refresh_view_attrs(self, rv, index, data):
        ''' Catch and handle the view changes '''
        self.index = index
        print(data)

What is the data here?

...
{'text': '22'}
{'text': '23'}

Ah, it is literally the data. Ok.

5:15pm. Ok, I get this. Let me just take a look at the other examples in the folder.

https://kivy.org/doc/stable/api-kivy.uix.checkbox.html

Oh, Kivy does have a checkbox. I see.

https://kivy.org/doc/stable/api-kivy.graphics.vertex_instructions.html#kivy.graphics.vertex_instructions.RoundedRectangle

Oh, Kivy does have a rounded rectangle. I could have used this instead of the regular one. Nice.

5:50pm. Had to take a break. Let me finally check out the charts.

https://stackoverflow.com/questions/36458283/kivy-for-data-visualisation

https://github.com/kivy-garden/graph https://github.com/kivy-garden/garden.matplotlib

How do I install these in Anaconda?

...Lunch time.

6:20pm. I am back. Can I try out the graph here.

pip install kivy_garden.graph

Do I use this command even though I am using an Anaconda env?

Hmmmm...

...Let me try it.

from math import sin
from kivy.app import runTouchApp
from kivy_garden.graph import Graph, MeshLinePlot
graph = Graph(xlabel='X', ylabel='Y', x_ticks_minor=5,
    x_ticks_major=25, y_ticks_major=1,
    y_grid_label=True, x_grid_label=True, padding=5,
    x_grid=True, y_grid=True, xmin=-0, xmax=100, ymin=-1, ymax=1)
plot = MeshLinePlot(color=[1, 0, 0, 1])
plot.points = [(x, sin(x / 10.)) for x in range(0, 101)]
graph.add_plot(plot)
runTouchApp(graph)

Not bad. This works now.

https://kivy-garden.github.io/graph/flower.html

Thankfully it is well documented.

I'll be able to make use of this. Right now I do not feel like messing with this anymore, but tomorrow I'll do a little example with the graph.

Actually, let me do one more thing for the day. Let me make use of the graph in Kivy language.

root = Builder.load_string('''
#:import graph kivy_garden.graph
Graph:
    xlabel:'X'
    ylabel:'Y'
    x_ticks_minor:5
    x_ticks_major:25
    y_ticks_major:1
    y_grid_label:True
    x_grid_label:True
    padding:5
    x_grid:True
    y_grid:True
    xmin:-0
    xmax:100
    ymin:-1
    ymax:1
''')

Oh, this works. Let me try adding the plot.

root = Builder.load_string('''
#:import graph kivy_garden.graph
#:import MeshLinePlot kivy_garden.graph.MeshLinePlot
#:import math math
Graph:
    xlabel:'X'
    ylabel:'Y'
    x_ticks_minor:5
    x_ticks_major:25
    y_ticks_major:1
    y_grid_label:True
    x_grid_label:True
    padding:5
    x_grid:True
    y_grid:True
    xmin:-0
    xmax:100
    ymin:-1
    ymax:1
    MeshLinePlot:
        color: 1,0,0,1
        points: [(x, math.sin(x / 10.)) for x in range(0, 101)]
''')

I do not know how to make this work.

qwe: MeshLinePlot()

Setting a fake property like this does confirm that the MeshLinePlot gets imported.

root = Builder.load_string('''
#:import graph kivy_garden.graph
#:import MeshLinePlot kivy_garden.graph.MeshLinePlot
#:import math math
Graph:
    xlabel:'X'
    ylabel:'Y'
    x_ticks_minor:5
    x_ticks_major:25
    y_ticks_major:1
    y_grid_label:True
    x_grid_label:True
    padding:5
    x_grid:True
    y_grid:True
    xmin:-0
    xmax:100
    ymin:-1
    ymax:1
    ps:
        [
        MeshLinePlot(color=[1,0,0,1], points=[(x, math.sin(x / 10.)) for x in range(0, 101)])
        ]
''')

for plot in root.ps: root.add_plot(plot)
runTouchApp(root)

I have no idea why this does not allow me to set the plot directly, but whatever. This will do for now.

6:55pm. Let me stop here for the day. Today was light all things considered. The hardest part was that morning rant.

Tomorrow I will start work on generating the enumerative replay buffer. That won't be too hard. The stage is set. All I need to do is keep moving forward."


Wednesday 2021-03-31 17:11:56 by Allan Callaghan

Parachutes now open and slow descent. Parachutes can be shot off. Troopers have a death graphic. Troopers falling can crush other troopers.

This almost feels like a working base for a game! Next up ... bombs! .. and then score .. and then tweaks .. and .. holy shit! .. it's real! Don't like widescreen for difficulty


Wednesday 2021-03-31 17:35:38 by thestubborn

single clothing item added to loadout to match a uniform (#4496)

  • single clothing item added to loadout to match a uniform, as well as some sprites for future plans

i live in a cum

  • FUCK IT, I'LL ADD THEM ALL

I don't give a FLYING FUCK, that bitch can fuck off, I've divorced her ass three hours ago! I'm SO SICK, my body is doing THINGS - THAT THING! And you over there, SHUT UP. And you, take off my pants! YOU WANNA SEE SOME - WEIRD SHIT?

  • i had some deadspace and it was annoying me

irtujrfffucking hate dogborgs

  • i mixed something up

/obj/item/clothing/suit/yakuza not majimcoat


Wednesday 2021-03-31 19:29:37 by Anand Patil

Create apatil

I am signing because, as far as I know, Stallman is accused of: 1. having a bed in his office, 2. being despondent at one point and wrongly attempting to pressure a woman into dating him, 3. having unpopular opinions, which he has arrived at in good faith and with good will, and which he has been willing to reconsider as he has learned new things. I am signing despite the risk to my personal reputation and career because I want the world to be safe for awkward free thinkers who occasionally become despondent and do the wrong thing. Also, no one alive today knows what the final endpoint of human morality will look like, when the arc of history finishes bending toward justice. Our history up to now should teach us humility. It's overwhelmingly likely that even the most progressive among us have thoughts and behaviors that our descendants will recognize as backward or barbaric, and that there exist oppressed groups to whose plight we are blind. People like Stallman, who are humane, principled and brave enough to point out reasonable objections to overwhelmingly popular norms and taboos, help us all get better more quickly.


Wednesday 2021-03-31 20:42:40 by GickerLDS

Added the psionicist base class. There is currently no premade build, but that will come soon. Added the following new psionic powers: Broker, Call to Mind, Catfall, Crystal Shard, Deceleration, Demoralize, Energy Ray, Force Screen, Fortify, Inertial Armor, Inevitable Strike, Mind Thrust, Defensive Precognition, Offensive Precognition, Offensive Prescience, Slumber, Vigor, Bestow Power, Biofeedback, Body Equilibrium, Breach, Concealing Amorpha, Concussion Blast, Detect Hostile Intent, Elfsight, Energy Adaptation, Energy Push, Energy Stun, Inflict Pain, Mental Disruption, Psychic Bodyguard, Recall Agony, Swarm of Crystals, Thought Shield, Body Adjustment, Concussive Onslaught, Endorphin Surge, Energy Burst, Energy Retort, Eradicate Invisibility, Heightened Vision, Mental Barrier, Mind Trap, Psionic Blast, Sharpened Edge, Ubiquitous Vision, Deadly Fear, Death Urge, Empathic Feedback, Energy Adaptation, Incite Passion, Intellect Fortress, Moment of Terror, Power Leech, Slip the Bonds, Wither, Wall of Ectoplasm, Ectoplasmic Shambler, Pierce Veil, Planar Travel, Power Resistance, Psychic Crush, Psychoportation, Shatter Mind Blank, Shrapnel Burst, Tower of Iron Will, Upheaval, Breath of the Black Dragon, Brutalize Wounds, Disintegration, Sustained Flight, Barred Mind, Cosmic Awareness, Energy Conversion, Evade Burst, Oak Body, Psychosis, Ultrablast, Body of Iron, Recall Death, Shadow Body, True Metabolism, Assimilate. Added the following psionic feats and class abilities: Aligned Attack Good, Aligned Attack Evil, Aligned Attack Chaos, Aligned Attack Law, Combat Manifestation, Critical Focus, Elemental Focus Fire, Elemental Focus Acid, Elemental Focus Sound, Elemental Focus Electricity, Elemental Focus Cold, Power Penetration, Greater Power Penetration, Quick Mind, Psionic Recovery, Proficient Psionicist, Enhanced Power Damage, Expanded Knowledge, Psionic Endowment, Psionic Focus, Empowered Psionics, Epic Power Penetration, Breach Power Resistance, Double Manifest, Perpetual Foresight. Added a new field for the staff 'set' command: psionicist. Added alchemy mem times as a player variable to the staff 'cedit' command. Increased Wizard and Sorcerer Hit Die from d4 to d6. Added 4th circle wizard/sorcerer spell: lesser missile storm. Fixed a bug where the arcane spell damage variable in cedit wasn't saving properly. The Enhanced Spell Damage feat can now be taken up to 3 times. Added the Empowered Magic feat, which can be taken up to 3 times. Updated the shifter-form series of feats so their help files are a little clearer about the forms they unlock. Added a new feat category to the study command: psionic.


Wednesday 2021-03-31 21:12:14 by Jeremy Steward

Remove owned / borrowed Context object from pipelines

The Context object in this scenario was held to try and designate some kind of lifetime relationship between pipelines and the Context itself. Unfortunately, this leads to some very wonky semantics when sending pipelines across threads.

This is especially bad if you're encountering an async executor, which may await on one thread and return on another. Arc<Context> won't be Send because Context isn't Send. And holding a reference is just as bad because then the Pipeline and Context would have to be passed together (and owned). Since multiple pipelines can exist per context, this is just frustrating.

At a lower level the context objects used in pipelines are held as std::shared_ptr<T>. It is unclear if this is maintained in any sane way with regards to the whole program, since getting a new context seems to effectively call std::shared_ptr<Context>::get(), which seems incredibly unsafe.

For now we will instead relax the restriction on pipelines holding Context types, and revisit in the future if there is a compelling reason to do so.

One alternative may be that holding onto Arc<Mutex<Context>> would make sense, but then we have to commit to a specific Mutex type, unless we make our pipeline types even more complex by templatizing them. I'd rather not force this choice onto the user since in most cases they will probably just get confused for very little benefit.


Wednesday 2021-03-31 22:19:29 by Lucy Wyman

(maint) Address puppetlabs_spec_helper deprecation warning

The puppetlabs_spec_helper ruby library we use as part of testing embedded Bolt modules was raising a deprecation warning that the default mocking framework was mocha, and that the tests themselves should specify which framework is being used in order to silence the warning. While I'd honestly love to switch to rspec, it's a lot of work to do so for relatively little gain. For now, setting the configuration to mocha to silence the warning is easiest.

!no-release-note


Wednesday 2021-03-31 22:40:19 by fintzd

chore: correct minor typos in code and usage clarification (#2438)

  • Updated README.md - Edit and clarify Usage section

Two parts to the edit:

  1. Simple text edit: capitalisation of "Script: Run With Profile" to fit the design used.

  2. Clarifications to the explanation of how the default language can be changed. I had trouble understanding it myself, so I thought a bit more detail would help others too. Also, the files in the config folder are not .coffee but .js, so I changed that too.

Cheers and thank you for keeping this project running! :)

  • Update README.md

My mistake in formatting, changed.

Co-authored-by: Amin Yahyaabadi [email protected]

Co-authored-by: hollossy [email protected]
Co-authored-by: Amin Yahyaabadi [email protected]


Wednesday 2021-03-31 23:28:30 by David Gibson

spapr: Adjust default VSMT value for better migration compatibility

fa98fbfc "PC: KVM: Support machine option to set VSMT mode" introduced the "vsmt" parameter for the pseries machine type, which controls the spacing of the vcpu ids of thread 0 for each virtual core. This was done to bring some consistency and stability to how that was done, while still allowing backwards compatibility for migration and otherwise.

The default value we used for vsmt was set to the max of the host's advertised default number of threads and the number of vthreads per vcore in the guest. This was done to continue running without extra parameters on older KVM versions which don't allow the VSMT value to be changed.

Unfortunately, even that reduced leakage of host configuration into guest-visible configuration still breaks things. Specifically, a guest with 4 (or fewer) vthreads/vcore will get a different vsmt value when running on a POWER8 (vsmt==8) host than on a POWER9 (vsmt==4) host. That means the vcpu ids don't line up, so you can't migrate between them, though you should be able to.
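
As a toy illustration (plain Python, not QEMU code — it assumes, per the description above, that thread t of virtual core c is assigned vcpu id c * vsmt + t), the ids the same guest gets on the two hosts simply don't line up:

```python
def vcpu_ids(cores, threads_per_core, vsmt):
    # Thread t of virtual core c gets vcpu id c * vsmt + t, so vsmt
    # controls the spacing between the thread-0 ids of successive cores.
    return [c * vsmt + t for c in range(cores) for t in range(threads_per_core)]

# The same 2-core, 4-thread/core guest on a POWER8 host (default vsmt == 8) ...
p8_ids = vcpu_ids(2, 4, 8)
# ... and on a POWER9 host (default vsmt == 4).
p9_ids = vcpu_ids(2, 4, 4)

print(p8_ids)  # [0, 1, 2, 3, 8, 9, 10, 11]
print(p9_ids)  # [0, 1, 2, 3, 4, 5, 6, 7]
print(p8_ids == p9_ids)  # False: the ids don't match, so migration breaks
```

Pinning vsmt to 8 everywhere makes both hosts produce the first layout, which is the point of the patch.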

Long term we really want to make vsmt == smp_threads for sufficiently new machine types. However, that means that qemu will then require a sufficiently recent KVM (one which supports changing VSMT) - that's still not widely enough deployed to be really comfortable to do.

In the meantime we need some default that will work as often as possible. This patch changes that default to 8 in all circumstances. This does change guest visible behaviour (including for existing machine versions) for many cases - just not the most common/important case.

Following is case by case justification for why this is still the least worst option. Note that any of the old behaviours can still be duplicated after this patch, it's just that it requires manual intervention by setting the vsmt property on the command line.

KVM HV on POWER8 host: This is the overwhelmingly common case in production setups, and is unchanged by design. POWER8 hosts will advertise a default VSMT mode of 8, and > 8 vthreads/vcore isn't permitted

KVM HV on POWER7 host: Will break, but POWER7s allowing KVM were never released to the public.

KVM HV on POWER9 host: Not yet released to the public, breaking this now will reduce other breakage later.

KVM HV on PowerPC 970: Will theoretically break it, but it was barely supported to begin with and already required various user visible hacks to work. Also so old that I just don't care.

TCG: This is the nastiest one; it means migration of TCG guests (without manual vsmt setting) will break. Since TCG is rarely used in production I think this is worth it for the other benefits. It does also remove one more barrier to TCG<->KVM migration which could be interesting for debugging applications.

KVM PR: As with TCG, this will break migration of existing configurations, without adding extra manual vsmt options. As with TCG, it is rare in production so I think the benefits outweigh breakages.

Signed-off-by: David Gibson [email protected]
Reviewed-by: Laurent Vivier [email protected]
Reviewed-by: Jose Ricardo Ziviani [email protected]
Reviewed-by: Greg Kurz [email protected]
(cherry picked from commit 8904e5a75005fe579c28806003892d8ae4a27dfa)
Signed-off-by: Greg Kurz [email protected]


Wednesday 2021-03-31 23:33:01 by David Rowley

Add Result Cache executor node

Here we add a new executor node type named "Result Cache". The planner can include this node type in the plan to have the executor cache the results from the inner side of parameterized nested loop joins. This allows caching of tuples for sets of parameters so that in the event that the node sees the same parameter values again, it can just return the cached tuples instead of rescanning the inner side of the join all over again. Internally, result cache uses a hash table in order to quickly find tuples that have been previously cached.

For certain data sets, this can significantly improve the performance of joins. The best cases for using this new node type are for join problems where a large portion of the tuples from the inner side of the join have no join partner on the outer side of the join. In such cases, hash join would have to hash values that are never looked up, thus bloating the hash table and possibly causing it to multi-batch. Merge joins would have to skip over all of the unmatched rows. If we use a nested loop join with a result cache, then we only cache tuples that have at least one join partner on the outer side of the join. The benefits of using a parameterized nested loop with a result cache increase when there are fewer distinct values being looked up and the number of lookups of each value is large. Also, hash probes to lookup the cache can be much faster than the hash probe in a hash join as it's common that the result cache's hash table is much smaller than the hash join's due to result cache only caching useful tuples rather than all tuples from the inner side of the join. This variation in hash probe performance is more significant when the hash join's hash table no longer fits into the CPU's L3 cache, but the result cache's hash table does. The apparent "random" access of hash buckets with each hash probe can cause a poor L3 cache hit ratio for large hash tables. Smaller hash tables generally perform better.

The hash table used for the cache limits itself to not exceeding work_mem * hash_mem_multiplier in size. We maintain a dlist of keys for this cache, and when we're adding new tuples and realize we've exceeded the memory budget, we evict cache entries starting with the least recently used ones until we have enough memory to add the new tuples to the cache.
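
The caching-with-LRU-eviction scheme can be sketched as a toy Python model (not PostgreSQL code — the names are illustrative, and for simplicity the budget here counts cached parameter sets rather than bytes):

```python
from collections import OrderedDict

class ResultCache:
    """Toy model: cache the inner side's result tuples keyed by the
    nested loop's parameter values, evicting least-recently-used
    entries once the budget is exceeded."""

    def __init__(self, budget, inner_scan):
        self.budget = budget          # max number of cached parameter sets
        self.inner_scan = inner_scan  # callable: params -> list of tuples
        self.cache = OrderedDict()    # insertion/use order tracks recency
        self.hits = self.misses = 0

    def lookup(self, params):
        if params in self.cache:
            self.hits += 1
            self.cache.move_to_end(params)   # mark as most recently used
            return self.cache[params]
        self.misses += 1
        rows = self.inner_scan(params)       # rescan the inner side of the join
        self.cache[params] = rows
        while len(self.cache) > self.budget:
            self.cache.popitem(last=False)   # evict the least recently used
        return rows
```

A repeated `lookup(1)` then hits the cache instead of rescanning, which is exactly the win described above when many outer rows share few distinct parameter values.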

For parameterized nested loop joins, we now consider using one of these result cache nodes in between the nested loop node and its inner node. We determine when this might be useful based on cost, which is primarily driven off of what the expected cache hit ratio will be. Estimating the cache hit ratio relies on having good distinct estimates on the nested loop's parameters.

For now, the planner will only consider using a result cache for parameterized nested loop joins. This works for both normal joins and also for LATERAL type joins to subqueries. It is possible to use this new node for other uses in the future. For example, to cache results from correlated subqueries. However, that's not done here due to some difficulties obtaining a distinct estimation on the outer plan to calculate the estimated cache hit ratio. Currently we plan the inner plan before planning the outer plan so there is no good way to know if a result cache would be useful or not since we can't estimate the number of times the subplan will be called until the outer plan is generated.

The functionality being added here is newly introducing a dependency on the return value of estimate_num_groups() during the join search. Previously, during the join search, we only ever needed to perform selectivity estimations. With this commit, we need to use estimate_num_groups() in order to estimate what the hit ratio on the result cache will be. In simple terms, if we expect 10 distinct values and we expect 1000 outer rows, then we'll estimate the hit ratio to be 99%. Since cache hits are very cheap compared to scanning the underlying nodes on the inner side of the nested loop join, this will significantly reduce the planner's cost for the join. However, it's fairly easy to see here that things will go bad when estimate_num_groups() incorrectly returns a value that's significantly lower than the actual number of distinct values. If this happens, it may cause us to make use of a nested loop join with a result cache instead of some other join type, such as a merge or hash join. Our distinct estimations have been known to be a source of trouble in the past, so the extra reliance on them here could cause the planner to choose slower plans than it did prior to having this feature. Distinct estimations are also fairly hard to make accurately when several tables have been joined already or when a WHERE clause filters out a set of values that are correlated to the expressions we're estimating the number of distinct values for.
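
The 99% figure works out as a back-of-the-envelope calculation (this is only the intuition, not the planner's actual costing, which also accounts for cache size and eviction):

```python
def estimated_hit_ratio(outer_rows, ndistinct):
    # Each distinct parameter set misses once (its first lookup) and hits
    # on every subsequent lookup, assuming all entries fit in the cache.
    ndistinct = min(ndistinct, outer_rows)
    return (outer_rows - ndistinct) / outer_rows

print(estimated_hit_ratio(1000, 10))  # 0.99
```

If estimate_num_groups() underestimates ndistinct, this ratio is overstated, which is exactly the failure mode described above.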

For now, the costing we perform during query planning for result caches does put quite a bit of faith in the distinct estimations being accurate. When these are accurate then we should generally see faster execution times for plans containing a result cache. However, in the real world, we may find that we need to either change the costings to put less trust in the distinct estimations being accurate or perhaps even disable this feature by default. There's always an element of risk when we teach the query planner to do new tricks that it decides to use that new trick at the wrong time and causes a regression. Users may opt to get the old behavior by turning the feature off using the enable_resultcache GUC. Currently, this is enabled by default. It remains to be seen if we'll maintain that setting for the release.

Additionally, the name "Result Cache" is the best name I could think of for this new node at the time I started writing the patch. Nobody seems to strongly dislike the name. A few people did suggest other names but no other name seemed to dominate in the brief discussion that there was about names. Let's allow the beta period to see if the current name pleases enough people. If there's some consensus on a better name, then we can change it before the release. Please see the 2nd discussion link below for the discussion on the "Result Cache" name.

Author: David Rowley
Reviewed-by: Andy Fan, Justin Pryzby, Zhihong Yu
Tested-by: Konstantin Knizhnik
Discussion: https://postgr.es/m/CAApHDvrPcQyQdWERGYWx8J%2B2DLUNgXu%2BfOSbQ1UscxrunyXyrQ%40mail.gmail.com
Discussion: https://postgr.es/m/CAApHDvq=yQXr5kqhRviT2RhNKwToaWr9JAN5t+5_PzhuRJ3wvg@mail.gmail.com


Wednesday 2021-03-31 23:41:07 by Rory Dale

2021-03-31

Wednesday, March 31st, 2021 - the most overlooked folk-rock LPs of the 1960s show! First thing I read this morning was a Rolling Stone article on Richard Thompson's new memoir Beeswing: Losing My Way and Finding My Voice. It tells a sad story of the fatal 1969 crash that killed the band's drummer Martin Lamble, and Thompson's girlfriend at the time Jeannie Franklin. The song I'll Keep It With Mine was played at Lamble's funeral. From here, I wanted to get into some of the deeper cuts of the folk-rock genre and found a superb article written by Richie Unterberger for Record Collector magazine in 2005. I played through some of the artists and built a playlist from here, and the show quickly evolved into a read-through of this article to accompany the selection of songs I had made. All credit to Richie Unterberger for this one!


< 2021-03-31 >