2,788,840 events, 1,148,120 push events, 1,883,093 commit messages, 147,605,090 characters
wip - mark CCNG build as staged appropriately
- Pulled in all CCNG-related source from the old capi_kpack_watcher
- Major, janky hack to run tests: the tests want to actually apply a CRD for the Build CR, which requires obtaining the actual YAML for the Build CRD; resolved this temporarily by checking it in
  - Very, very bad, since we should be relying on the CRD version associated with the go module we pull in
  - Also very bad and annoying, because the kpack CRDs are all ytt templates and need to be run through ytt first (hurray for no easy fixes)
- Did some small, non-breaking refactoring to generate mocks with counterfeiter so they work better with Ginkgo (which kubebuilder is using to bootstrap our repo)
  - Non-breaking because I left the mock generation with mockery, since the old spec tests still want them
- Also did another non-breaking refactor to add a new constructor for creating a CCNG client
  - This one requires that you pass everything in explicitly, which I think we should be doing going forward anyway to avoid hidden side effects, improve declarativeness, make code easier to follow, and hopefully mitigate some future debt
  - Non-breaking since I left the old constructor
- Controller tests are using mocks, but they're supposed to be integration tests, so we should eventually refactor to remove some of this mocking
  - Don't want to spin up an entire CCNG every time, but could at least have the tests make real HTTP requests to a fake ghttp server, for instance
[#172847399]
Authored-by: Jaskanwal Pawar [email protected]
oh shit oh fuck OH GOD IT BEGINS
This fahr_celsius.c file can also be found via a gist on my profile (although this is even more truncated than the gist version LMFAO)
Switch Velocity from existing toml4j+homebrew TOML serializer to night-config.
This allows us to accept many more valid configurations and lets us eliminate a bunch of ugly hacks.
"9:40am. Let me chill a bit before I start.
9:50am. I've had a bunch of time to think about how I want to do this. I should just stick to the usual system - make the whole thing as close to the naive version and then do memoization in select places. That is the way to go.
10:10am. Ok, focus me. Enough of the Blattodea thread. Let me just read the latest chapter of Hellrage and then I will do some programming.
10:35am. Let me start. It is time to get the ball rolling.
To start with, I've decided on something during the night. Rather than transferring the range, it would be better if I transfer the specific lines. That way there is no need to do the splitting. It would only negligibly slow down the transfer.
Mostly, I want this because it would be a pain to change the lines directly.
10:40am. I am distracted. Focus me, focus. It is always a hard slog.
Before I start this part seriously, I'll try prototyping it. Maybe that will give me some feeling for how to do blocks. Oh, yeah. It might not be good to assume that blocks during earlier stages and during typechecking are the same thing.
module Blocks
type Lang =
    | Separator
    | Line of string
How about I do something like this?
type File = Lang []
Hmmm, better make it a resizable array.
10:50am.
type Lang =
    | Separator
    | Line of string
type File = Lang ResizeArray
type Block = string []
type FileBlocks = Block ResizeArray
Let me start with this. The real thing will be even harder as I have those tricky comments to deal with, but I should be able to understand if I can make the system work with this.
Most of the things I need here, I already did in the prototype - the file parsing parts, but doing the mapping here is more involved.
10:55am. The first thing I need to do is to implement the remove and insert functions. For starters, I need to decide whether I want to keep around a type File = Lang ResizeArray or to just operate on FileBlocks.
...I have no idea how to handle this elegantly at all. All the operations that I need are not hard in isolation, but this problem feels far harder than it should be. I would not be surprised if I spent a few days playing with it.
11am.
let insert (blocks : FileBlocks) (pos : int) (lines : Lang []) =
    ()
Let me try implementing this.
11:05am.
let insert (blocks : FileBlocks) (pos : int) (lines : Lang []) =
    let rec find c i =
        if i < blocks.Count then
            let c = c + blocks.[i].Length
            if pos <= c then
            else find c (i+1)
        else failwith "Can't insert out of bounds."
This is not too hard, but I feel a great sense of inertia as I write this that I have to force myself to overcome. When I go through this, boy will it feel good.
First, let me convert the lines to blocks.
Maybe rather than inserting lines into blocks, it would be better to try inserting blocks into blocks.
One thing I am confused about is how to handle blocks being empty. I keep imagining it from a filled perspective.
Yeah, I can simplify things even more. Since a single block is just an array of lines, let me do something.
11:15am.
let insert_block (block : Block) pos (x : Block []) =
    assert (pos <= block.Length)
This is promising.
Now let me split the block at pos.
...Let me do the chores here. I'll be back in a jiffy."
Readds kits to blood brothers, check on dream daemon tomorrow
"11:30am. I am back. Time to resume.
let insert_block (block : Block) pos (x : Block []) =
    assert (pos <= block.Length)
    let l,r = Array.splitAt pos block
Before I left I did this. Now all I need to do is merge this.
Array.concat [[|l|];x;[|r|]]
No, this is not quite right. I want to merge the left with the first block of the array, and the right with the last.
11:45am.
let insert_block (block : Block) pos (x : Block []) =
    if Array.isEmpty x then [|block|] else
    let l,r = Array.splitAt pos block
    let x = Array.copy x
    x.[0] <- Array.append l x.[0]
    let last = x.Length-1
    x.[last] <- Array.append x.[last] r
    x
Yeah, this is wonderful. This is exactly what the insertion functionality should be. Now that I have this, it will make thinking about the rest easier.
...Oh, yes. The next thing on the list is to convert an array of lines to an array of blocks.
This should be easy.
type Line =
    | Separator
    | Text of string
type File = Line ResizeArray
Let me name it better.
11:55am.
let lines_to_blocks (lines : Line []) : Block [] =
    let blocks = ResizeArray()
    let block = ResizeArray()
    let blocks_add () = blocks.Add(block.ToArray()); block.Clear()
    lines |> Array.iter (function Separator -> blocks_add() | Text t -> block.Add(t))
    blocks_add()
    blocks.ToArray()
This should be good.
Now let me do insert.
12:05pm.
type Line = Separator | Text of string
type Block = string []

let lines_to_blocks (lines : Line []) : Block [] =
    let blocks = ResizeArray()
    let block = ResizeArray()
    let blocks_add () = blocks.Add(block.ToArray()); block.Clear()
    lines |> Array.iter (function Separator -> blocks_add() | Text t -> block.Add(t))
    blocks_add()
    blocks.ToArray()

let insert_block (block : Block) pos (x : Block []) =
    if Array.isEmpty x then [|block|] else
    let l,r = Array.splitAt pos block
    let x = Array.copy x
    x.[0] <- Array.append l x.[0]
    let last = x.Length-1
    x.[last] <- Array.append x.[last] r
    x

let insert (blocks : Block ResizeArray) (pos : int) (lines : Line []) =
    let rec loop pos i =
        let l = blocks.[i].Length
        if pos <= l then
            // take the block out and splice the new blocks in at the same position
            let block = blocks.[i]
            blocks.RemoveAt(i)
            blocks.InsertRange(i, insert_block block pos (lines_to_blocks lines))
        else loop (pos-l) (i+1)
    loop pos 0
I think this is a pretty decent approximation of how block insertion should be done.
12:10pm. In the real thing, I will have difficulty with comments since they will cause the block separation to be context dependent...
You know what, mutually recursive blocks and comments are too much of a pain in the ass. I will just have the comments be separators, period. The same goes for and. The later stage can merge them into a single block.
Let me figure out how to tackle the problem at hand first.
So here I have insertion done.
The next thing needs to be the opposite of that - removal.
12:20pm. Let me take a break here.
Agh...there is just no easy way to get through this. I'll do it one step at a time."
Create Christmas Tree
Chirag is a pure Desi boy, and his one and only dream is to meet Santa Claus. He decided to decorate a Christmas tree for Santa this coming Christmas. Chirag made an interesting Christmas tree that grows day by day.
The Christmas tree is comprised of the following: Parts and a Stand. Each Part is further comprised of Branches, and Branches are comprised of Leaves.
Understand how the tree appears as a function of days, and on that basis print the tree as it appears on the given day. Below are the rules that govern how the tree appears on a given day.
Write a program to generate such a Christmas tree whose input is the number of days.
Rules:
- If the tree is one day old, it cannot grow; print the message “You cannot generate christmas tree”.
- The tree will die after 20 days; it should give the message “Tree is no more”.
- The tree will have one part less than the number of days. E.g. on the 2nd day the tree will have 1 part and one stand, on the 3rd day 2 parts and one stand, on the 4th day 3 parts and one stand, and so on.
- The top-most part will be the widest and the bottom-most part will be the narrowest.
- The difference in the number of branches between the top-most part and the second from the top will be 2.
- The difference in the number of branches between the second from the top and the bottom-most part will be 1.
Below is an illustration of how the tree looks on the 4th day.
Input Format: The first line of input contains k, the number of inputs. The next k lines denote the number of days N.
Output Format: Print the Christmas tree for the given N
OR
Print “You cannot generate christmas tree” if N <= 1
OR
Print “Tree is no more” if N > 20
Constraints: 0 <= N <= 20
Example:
Input:
k = 2
N = 1
N = 2
Output:
You cannot generate christmas tree
*
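A minimal sketch of one way to approach this exercise, in Python. The day-4 illustration is missing from the text above, so the exact leaf layout and branch counts below are assumptions; only the N <= 1 / N > 20 messages and the "one part less than the number of days, plus a stand" rule come directly from the prompt.
def christmas_tree(n: int) -> str:
    """Render the tree for day n under the assumed layout described above."""
    if n <= 1:
        return "You cannot generate christmas tree"
    if n > 20:
        return "Tree is no more"
    parts = n - 1                  # rule: one part less than the number of days
    width = 2 * parts + 1          # assumed width of the top-most (widest) part
    rows = []
    for p in range(parts):
        leaves = width - 2 * p     # assumed: each lower part is two leaves narrower
        rows.append(("*" * leaves).center(width))
    rows.append("*".center(width))  # the stand
    return "\n".join(rows)

if __name__ == "__main__":
    k = int(input())
    for _ in range(k):
        print(christmas_tree(int(input())))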
Reduce result discount in conSize
Ticket #18282 showed that the result discount given by conSize was massively too large. This patch reduces that discount to a constant 10, which just balances the cost of the constructor application itself.
Note [Constructor size and result discount] elaborates, as does the ticket #18282.
Reducing the result discount reduces inlining, which affects perf. I found that I could increase the unfoldingUseThreshold from 80 to 90 in compensation; in combination with the result discount change I get these overall nofib numbers:
Program         Size    Allocs  Runtime  Elapsed  TotalMem
boyer          -0.3%    +5.4%    +0.7%    +1.0%      0.0%
cichelli       -0.3%    +5.9%    -9.9%    -9.5%      0.0%
compress2      -0.4%    +9.6%    +7.2%    +6.4%      0.0%
constraints    -0.3%    +0.2%    -3.0%    -3.4%      0.0%
cryptarithm2   -0.3%    -3.9%    -2.2%    -2.4%      0.0%
gamteb         -0.4%    +2.5%    +2.8%    +2.8%      0.0%
life           -0.3%    -2.2%    -4.7%    -4.9%      0.0%
lift           -0.3%    -0.3%    -0.8%    -0.5%      0.0%
linear         -0.3%    -0.1%    -4.1%    -4.5%      0.0%
mate           -0.2%    +1.4%    -2.2%    -1.9%    -14.3%
parser         -0.3%    -2.1%    -5.4%    -4.6%      0.0%
puzzle         -0.3%    +2.1%    -6.6%    -6.3%      0.0%
simple         -0.4%    +2.8%    -3.4%    -3.3%     -2.2%
veritas        -0.1%    +0.7%    -0.6%    -1.1%      0.0%
wheel-sieve2   -0.3%   -19.2%   -24.9%   -24.5%    -42.9%

Min            -0.4%   -19.2%   -24.9%   -24.5%    -42.9%
Max            +0.1%    +9.6%    +7.2%    +6.4%    +33.3%
Geometric Mean -0.3%    -0.0%    -3.0%    -2.9%     -0.3%
I'm ok with these numbers, remembering that this change removes an exponential increase in code size in some in-the-wild cases.
I investigated compress2. The difference is entirely caused by this function no longer inlining
WriteRoutines.$woutputCodes
  = \ (w :: [CodeEvent]) ->
      let result_s1Sr
            = case WriteRoutines.outputCodes_$s$woutput w 0# 0# 8# 9# of
                (# ww1, ww2 #) -> (ww1, ww2)
      in (# case result_s1Sr of (x, _) ->
              map @Int @Char WriteRoutines.outputCodes1 x
          , case result_s1Sr of { (_, y) -> y } #)
It was right on the cusp before, driven by the excessive result discount. Too bad!
Metric Decrease: T12227 T12545 T15263 T1969 T5030 T9872a T9872c
No error for no-ops in ungroup and friends
Currently, ungroup and unsubtract throw a DataErr if a dataset is not grouped or not subtracted. Why is this necessary?
For purely interactive use with just one or two datasets, an exception is one way of providing feedback to the user and reminding them: "hey, you did not even group this before, are you sure you know what you are doing?".
However, when working with more datasets it gets annoying, and it's even worse when I have a long-running script. Why should the script fail if all I'm doing is accidentally ungrouping something that was already ungrouped?
Here is what I do: I play around interactively, group this or that, and then I want to ungroup or unsubtract everything. Currently, I have to go one by one, with code like this:
for i in ui.list_data_ids():
    try:
        ui.ungroup(i)
    except sherpa.utils.err.DataErr:
        # It's already ungrouped
        pass
That's clunky, so I wonder: Why is it necessary to throw an Exception when a dataset is already ungrouped or unsubtracted? If I want to ungroup, why would it matter to me if data was previously grouped or not? What matters is that it's ungrouped after.
This PR removes some of those (in my opinion) annoying exceptions.
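To illustrate the behaviour this PR argues for, here is a minimal sketch using a hypothetical Dataset class (not Sherpa's actual implementation): ungroup() simply returns when the data is already ungrouped instead of raising DataErr.
class Dataset:
    """Hypothetical stand-in for a Sherpa dataset, for illustration only."""

    def __init__(self):
        self.grouped = False

    def group(self):
        self.grouped = True

    def ungroup(self):
        # Proposed behaviour: silently do nothing if there is nothing to undo,
        # rather than raising a DataErr for an already-ungrouped dataset.
        if not self.grouped:
            return
        self.grouped = False
With this, the loop over ui.list_data_ids() shown above no longer needs the try/except workaround.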
# Contributor Covenant Code of Conduct

## Our Pledge

In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, religion, or sexual identity and orientation.

## Our Standards

Examples of behavior that contributes to creating a positive environment include:

* Using welcoming and inclusive language
* Being respectful of differing viewpoints and experiences
* Gracefully accepting constructive criticism
* Focusing on what is best for the community
* Showing empathy towards other community members

Examples of unacceptable behavior by participants include:

* The use of sexualized language or imagery and unwelcome sexual attention or advances
* Trolling, insulting/derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or electronic address, without explicit permission
* Other conduct which could reasonably be considered inappropriate in a professional setting
Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior.
Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful.
This Code of Conduct applies both within project spaces and in public spaces when an individual is representing the project or its community. Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers.
Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team at [email protected]. All complaints will be reviewed and investigated and will result in a response that is deemed necessary and appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately.
Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project's leadership.
This Code of Conduct is adapted from the Contributor Covenant, version 1.4, available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html
For answers to common questions about this code of conduct, see https://www.contributor-covenant.org/faq
se_open_data/utils/deployment.rb - performs rsync deployment
Isolates the rsync messiness in one place, and avoids needing to tweak rsync work-arounds and hacks in multiple places.
Specifically, I want to allow deployments to the local filesystem when run as a cron-job on the development server. This makes the deployment simpler and more secure (no need to deploy ssh keys etc.).
Also, I want to fix the bug in which files are deployed with crazy ownership (typically user ubuntu, group lxd, but this presumably depends on the deploying user's setup). Ownership should be settable to something like 'www-data'.
And if possible, avoid using rsync and ssh. Use rsync in a way which should work on rsync servers, and especially with rsync to accounts with log-ins constrained to allow only rsync in a specific way with a specific key. Because we will probably want to enforce that kind of security at some point in the future.
This is still ugly. Perhaps we should use capistrano or something else. But I hope it will do.
updated with new language
We're bringing coaching out of the C-suite and into the hands of healthcare and human services employees. Our tech-powered total well-being solution allows employers to support their workers through all six dimensions of wellness (Emotional, Occupational, Physical, Social, Intellectual, and Spiritual) by matching employees to their best fit coaches, best fit offerings, and best fit wellness providers.
Remember how we need atmos to breathe and how there was a missing pipe from air to distro because haha brain fucky (#309)
Dex, you can copy my code, but you will never find any vital information!!!
Fuck you.
soon I can edit creatures, just need to connect the UI
I am screaming inside from thinking about the anatomy editor why did I do this to myself? Because it's kinda cool? Oh shit that's the reason right. Fuck...
Remove Google Analytics
Thanks for contributing to the FOSS ecosystem with your product!
I do have one question for you, though: would you be willing to replace the Google Analytics tracking code with something that respects user privacy and security?
There are much better alternatives to Google Analytics, which can be seen here.
As Google has been at the center of the majority of court cases regarding devious abuse of user privacy, we should seek to eliminate their influence from something they can't beat:
Free, Open-Source Software.
Instead of relying on a proprietary, invasive product to generate analytical data about your project, perhaps you could try a FOSS alternative? I have gone to the trouble of creating a table comparing these analytics solutions; it details the issues (annoyances) of each:
|        | Fathom  | Freshlytics | Plausible | Goatcounter | Google [Analytics] |
| ------ | ------- | ----------- | --------- | ----------- | ------------------ |
| Issues | cookies | (none)      | (none)    | (none)      | Requires a Privacy Policy; collects IP information; Social interactions; Browsing history; GDPR cookie notice is required; ability to opt-out of tracking; creates an advertising profile of users |
Anyway, I hope we can work together to replace Google Analytics in your project! I love seeing FOSS move away from dependencies on proprietary technologies. I can't wait to see what happens!
Thanks!
I really fucking hate OpenDocument XML bullshit. Contact boxes are going to span across pages, nothing I can do about that apparently. I should be able to condense them... somehow.