3,094,604 events, 1,570,645 push events, 2,449,354 commit messages, 224,773,078 characters
Add async server.start() function
Previously, server startup worked like this:

- new ApolloServer:
  - If no gateway, calculate the schema and schema-derived data immediately.
  - If gateway, kick off gateway.load from the end of the constructor, and if it async-throws, log an error once and leave the server kinda broken forever.
- At various spots in the framework integration code, call (but don't await) the protected willStart function, which is an async function that first waits for the gateway to load the schema if necessary and then runs serverWillStart plugin functions; save the Promise returned by calling this.
- At request time in the framework integration code, await that Promise. And also, if there's no schema, fail with an error.
Now server startup works like this:

- ApolloServer represents its state explicitly with a new ServerState type.
- new ApolloServer:
  - If no gateway, initialize all the schema-derived state directly like before (though the state now lives inside ServerState).
  - If gateway, the constructor DOES NOT kick off gateway.load().
- You can now call await server.start() yourself, which will first await gateway.load if necessary, and then await all serverWillStart calls.
- If you're using apollo-server rather than an integration, server.listen() will just transparently do this for you; explicit start() is just for integrations!
- The integration places that used to call willStart now call server.ensureStarting() instead, which will kick off server.start in the background if you didn't (and log any errors thrown).
- The places that used to await promiseWillStart no longer do so; generally right after that code we end up calling graphqlServerOptions, which now awaits server.ensureStarted. That starts the server if necessary and throws if startup threw.
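For concreteness, here is a minimal TypeScript sketch of the kind of explicit state machine described above. All phase names and fields below are illustrative assumptions, not the actual ServerState implementation:

    type SchemaDerivedData = unknown; // placeholder for the real schema-derived state

    // The server lifecycle as a tagged union: each phase carries only the
    // data that is valid in that phase.
    type ServerState =
      | { phase: 'initialized'; schemaDerived: SchemaDerivedData }
      | { phase: 'starting'; barrier: Promise<void> }
      | { phase: 'started'; schemaDerived: SchemaDerivedData }
      | { phase: 'failed to start'; error: Error };

    // ensureStarted-style logic: start if needed, wait if already starting,
    // and rethrow a recorded startup failure on every call.
    async function ensureStarted(
      getState: () => ServerState,
      start: () => Promise<void>,
    ): Promise<void> {
      const state = getState();
      switch (state.phase) {
        case 'initialized':
          return start();
        case 'starting':
          return state.barrier;
        case 'started':
          return;
        case 'failed to start':
          throw state.error;
      }
    }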
The overall change to user experience:

- If you're using apollo-server, startup errors will cause listen to reject; no code changes are necessary.
- If you're using an integration, you are encouraged to call await server.start() yourself immediately after the constructor, which will let you detect startup errors (see the sketch below).
- But if you don't do that, the server will call start itself eventually. When you try to execute your first GraphQL request, start will happen if it hasn't already. Also, an integration call like server.applyMiddleware will initiate a background start. If startup fails, the startup error will be logged on every failed GraphQL request, not just the first time like happened before.
- If you have your own ApolloServer subclass that calls the protected willStart method, it won't work anymore, because that method is gone. Consider whether you can eliminate that call by just calling start, or perhaps call ensureStarting instead.
This is close enough to backwards-compatible to be appropriate for a v2 minor release. We are likely to make start() required in Apollo Server 3 (other than for apollo-server).
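As a usage example, here is a minimal sketch of the recommended integration pattern, assuming apollo-server-express and a trivial placeholder schema (typeDefs and resolvers stand in for your own):

    import { ApolloServer, gql } from 'apollo-server-express';
    import express from 'express';

    const typeDefs = gql`
      type Query {
        hello: String
      }
    `;
    const resolvers = { Query: { hello: () => 'world' } };

    async function main() {
      const server = new ApolloServer({ typeDefs, resolvers });
      // Startup errors (gateway.load, serverWillStart) surface here,
      // instead of being logged on every failing request later.
      await server.start();
      const app = express();
      server.applyMiddleware({ app });
      app.listen(4000, () => console.log('ready'));
    }

    main().catch((err) => {
      console.error('Server failed to start:', err);
      process.exit(1);
    });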
Also:
- Previously we used the deprecated ApolloServer.schema field to determine whether to install ApolloServerPluginInlineTrace, which we want to have active by default for federated schemas only. If you're using a gateway, this field isn't actually set at the time that ensurePluginInstantiation reads it. That's basically OK because we don't want to turn on the plugin automatically in the gateway, but in the interest of avoiding use of the deprecated field, I refactored it so that ApolloServerPluginInlineTrace is installed by default (ie, if you don't install your own version or install ApolloServerPluginInlineTraceDisabled) without checking the schema, and then (if it's installed automatically) it decides whether or not to be active by checking the schema at serverWillStart time (see the sketch after this list).
- Similarly, schema reporting now throws in its serverWillStart if the schema is federated, instead of in ensurePluginInstantiation. (This does mean that if you're not using the new start() or apollo-server, that failure won't make your app fail as fast as if the ApolloServer constructor threw.)
- Fix some fastify tests that used a fixed listen port to not do that.
- I am doing my best to never accidentally run prettier on whole files and instead to very carefully select specific blocks of the file to format them several times per minute. Apparently I screwed up and ran it once on packages/apollo-server-core/src/ApolloServer.ts. The ratio of "prettier changes" to "actual changes" in that file is low enough that I'd rather just leave the changes in this PR rather than spending time carefully reverting them. (It's one of the files I work on the most and being able to keep it prettier-clean will be helpful anyway.)
- Replace a hacky workaround for the lack of start in the op reg tests!
- Replace a use of a Barrier class I added recently in tests with the @josephg/resolvable npm package, which does basically the same thing. Use that package in new tests and in the core state machine itself.
- While running tests I found that some test files hung if run separately due to lack of cleanup. I ended up refactoring the cache tests to:
  - make who is responsible for calling cache.close more consistent
  - make the Redis client mocks self-contained mocks of the ioredis API instead of starting with an actual ioredis implementation and mocking out some internals
  - clean up Jest fake timers when a certain test is done
  I'm not super certain exactly which of these changes fixed the hangs, but it does seem better this way. (Specifically, I think the fake timer fix, which I did last, is what actually fixed it, but the other changes made it easier for me to reason about what was going on.) Can factor out into another PR if helpful.
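For illustration, here is a minimal sketch of the "decide whether to be active at serverWillStart time" pattern described above. The isFederatedSchema helper is hypothetical (it stands in for however federation is actually detected), and this is not the real ApolloServerPluginInlineTrace code:

    import type { ApolloServerPlugin } from 'apollo-server-plugin-base';
    import type { GraphQLSchema } from 'graphql';

    // Hypothetical federation check, for illustration only.
    function isFederatedSchema(schema: GraphQLSchema): boolean {
      return schema.getDirective('key') != null;
    }

    export function inlineTraceLikePlugin(): ApolloServerPlugin {
      let active = false;
      return {
        async serverWillStart({ schema, logger }) {
          // The plugin is installed unconditionally, but only activates
          // itself once it can inspect the schema.
          active = isFederatedSchema(schema);
          logger.debug(active ? 'Federated schema: plugin active.' : 'Non-federated schema: plugin inactive.');
        },
        // ...request hooks would check `active` before doing any work...
      };
    }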
Fixes #4921. Fixes apollographql/federation#335.
TODO:
- Go through all docs and READMEs that have 'FIXME start' and add calls to start. This involves verifying that you can actually do top-level await in the contexts that matter. (eg if it turns out that you really can't call await before you assign a handler in Lambda, that's interesting and may require some other changes to this PR!)
- Actually document start() in the apollo-server reference
- Document start() in all the integrations references
- CHANGELOG
- consider whether removing the protected willStart function is OK
Created Text For URL [www.sunnewsonline.com/a-taste-of-hell-kerosene-explosion-makes-life-horrible-for-11-year-old-girl/]
Fixed Fox TF Code + Misc
-Went through the code and changed all instances of $fox (in relation to the transformation) to $foxgirl because $fox is used for something else in the official version of the game.
-Added cow to list of beast types.
-Copied animation code for wolf to use for fox, copied animation code for pig to use for cow. Later I'll need to make unique art.
-Made it so you can always confess to Alex as long as you're not currently dating. Later, I will write a scene where Alex will reject you if love is not high enough. For now it's just automatic. I noticed a little heart shows up in the social screen when you're dating Robin, but that doesn't happen for Alex so I may need to find that code and add it.
-Added <<farm_work_update>> to try to fix error when you get assaulted after sleeping in the barn. Still not working, but now I think it's because I switched to cow instead of pigs. More work likely needed to add cows as they were not originally in the game.
Skin code initial set up
Oh gosh skins... oh boy will this suck.
fix: implement a higher ratelimit for invite caching as Discord API developers are all extremely idiotic and think that we're abusing their god-awful api
New data: 2021-03-03: See data notes.
Revise historical data: cases (BC, MB, ON, SK).
Note regarding deaths added in QC today: “19 new deaths, for a total of 10,426: 2 deaths in the last 24 hours, 8 deaths between February 24 and March 1, 8 deaths before February 24, 1 death at an unknown date.” We report deaths such that our cumulative regional totals match today’s values. This sometimes results in extra deaths with today’s date when older deaths are removed.
Recent changes:
2021-01-27: Due to the limit on file sizes in GitHub, we implemented some changes to the datasets today, mostly impacting individual-level data (cases and mortality). Changes below:
- Individual-level data (cases.csv and mortality.csv) have been moved to a new directory in the root directory entitled “individual_level”. These files have been split by calendar year and named as follows: cases_2020.csv, cases_2021.csv, mortality_2020.csv, mortality_2021.csv. The directories “other/cases_extra” and “other/mortality_extra” have been moved into the “individual_level” directory.
- Redundant datasets have been removed from the root directory. These files include: recovered_cumulative.csv, testing_cumulative.csv, vaccine_administration_cumulative.csv, vaccine_distribution_cumulative.csv, vaccine_completion_cumulative.csv. All of these datasets are currently available as time series in the directory “timeseries_prov”.
- The file codebook.csv has been moved to the directory “other”.
We appreciate your patience and hope these changes cause minimal disruption. We do not anticipate making any other breaking changes to the datasets in the near future. If you have any further questions, please open an issue on GitHub or reach out to us by email at ccodwg [at] gmail [dot] com. Thank you for using the COVID-19 Canada Open Data Working Group datasets.
- 2021-01-24: The columns "additional_info" and "additional_source" in cases.csv and mortality.csv have been abbreviated similarly to "case_source" and "death_source". See notes in README.md from 2020-11-27 and 2021-01-08.
Vaccine datasets:
- 2021-01-19: Fully vaccinated data have been added (vaccine_completion_cumulative.csv, timeseries_prov/vaccine_completion_timeseries_prov.csv, timeseries_canada/vaccine_completion_timeseries_canada.csv). Note that this value is not currently reported by all provinces (some provinces have all 0s).
- 2021-01-11: Our Ontario vaccine dataset has changed. Previously, we used two datasets: the MoH Daily Situation Report (https://www.oha.com/news/updates-on-the-novel-coronavirus), which is released weekdays in the evenings, and the “COVID-19 Vaccine Data in Ontario” dataset (https://data.ontario.ca/dataset/covid-19-vaccine-data-in-ontario), which is released every day in the mornings. Because the Daily Situation Report is released later in the day, it has more up-to-date numbers. However, since it is not available on weekends, this leads to an artificial “dip” in numbers on Saturday and “jump” on Monday due to the transition between data sources. We will now exclusively use the daily “COVID-19 Vaccine Data in Ontario” dataset. Although our numbers will be slightly less timely, the daily values will be consistent. We have replaced our historical dataset with “COVID-19 Vaccine Data in Ontario” as far back as they are available.
- 2020-12-17: Vaccination data have been added as time series in timeseries_prov and timeseries_hr.
- 2020-12-15: We have added two vaccine datasets to the repository, vaccine_administration_cumulative.csv and vaccine_distribution_cumulative.csv. These data should be considered preliminary and are subject to change and revision. The format of these new datasets may also change at any time as the data situation evolves.
Note about SK data: As of 2020-12-14, we are providing a daily version of the official SK dataset that is compatible with the rest of our dataset in the folder official_datasets/sk. See below for information about our regular updates.
SK transitioned to reporting according to a new, expanded set of health regions on 2020-09-14. Unfortunately, the new health regions do not correspond exactly to the old health regions. Additionally, case time series using the new boundaries do not exist for dates earlier than August 4, making it impossible to provide a complete time series using the new boundaries.
For now, we are adding new cases according to the list of new cases given in the "highlights" section of the SK government website (https://dashboard.saskatchewan.ca/health-wellness/covid-19/cases). These new cases are roughly grouped according to the old boundaries. However, health region totals were redistributed when the new boundaries were instituted on 2020-09-14, so while our daily case numbers match the numbers given in this section, our cumulative totals do not. We have reached out to the SK government to determine how this issue can be resolved. We will rectify our SK health region time series as soon as it becomes possible to do so.
HOLY CRAP. I started this code at like 7 pm and I finished it at 12:33 AM. Honestly a lot of fun and was pretty tough. Had some trouble when sendkeying my tweet message because certain letters would be cut off. Fixed it by adding a Webdriver wait thingy. The code works though, now I just have to make a dedicated Pascal Siakam twitter account instead of using some random one!
start styling happy hour, party time, breakfast, lunch, dinner in same class
Current: Modal Spinner, saved user data into db after creation of account, download user data after signin (implemented a way to keep track of account setup completion), commented & organized
Next: Finish "try & catch"s, Comment everything
Comment: Greetings, from the past! Yesterday I woke up at 3pm, so I haven't slept since, and won't sleep until 11:59pm more or less to fix my schedule, so this might not be the last commit of the night/day idk. This part was, honestly, so boring xD ffs, but at least I did it and implemented it pretty well I think; finishing it was soooo satisfactory, but I can already tell that finishing the "try & catch"s will be boring as fuck too. Commenting will be meh, and then to design the quiz paaaaaage, I really want to do that! Well ^^, here's today's link: https://youtu.be/u9rT73uzVKI, if I already shared this then it should say something about how much I like it xD. Sometimes the videos augment the feeling that the song creates, but also 「Cö shu Nie」(I know) are so fucking good. Bye, and I hope that you're enjoying my app ^^ this is only my first project, I have more plans on the way, I just need to start somewhere and this was great to start. Huge things on the way 04/03/21. (d/m/y because we don't track time by 1hr 23secs 41mins)
"10:10am. I am up. Is the PL thread still not up yet. This is the longest I had to wait for it ever. Wow, it is still not out. Nevermind it then.
Let me chill just 5-10m.
10:25am. Let me start.
This applying to companies thing has been driving me insane. But it is good that I am going through this. I need to look at it from every available perspective.
I've been focusing too much on the benefits and too little on the disadvantages.
The disadvantage is that having sponsors would tie up my time, and once they go bust, a lot of my work would go to waste. Furthermore, most of the value captured from Spiral would go to them. Sure I could get 3k per month, but they would get large cost saving per each programmer. If using Spiral allows them to do the same kind of work with 3 programmers that would otherwise require 10, just how much would that win them.
A fair deal would be for them to pay royalties based on their cost savings. That would make my earnings astronomical.
And yet here I am getting concerned over not being taken advantage of.
What a fool I am.
10:30am. Seriously, if I still haven't crushed poker after some time, I should just make up my mind to do those 3k per month jobs for a single month. That will be enough to get me an upgrade. Hopefully Bitcoin will have crashed by that time. Its rise combined with Corona has sent computer component costs through the roof. It might not happen tomorrow, but probably by the end of next year.
3k per month is low, but I could easily get those and would not have to waste my time looking around or negotiating. That is the primary benefit of them. 6k would be better obviously, but I would not be able to get a much better rig than with 3k. Right now I do not need an upgrade.
10:35am. Now let me start.
It is finally time to do some work. This is where my real gains will come from.
I've made up my mind, I'll wait until I've gone through the MainStreamServer module and then I will just fire off the last batch of applications to everybody out there. I won't let this drag on anymore.
let cons_fulfilled l =
    let rec loop olds = function
        | Cons(old,next) when Promise.Now.isFulfilled next -> loop (PersistentVector.conj old olds) (Promise.Now.get next)
        | _ -> olds
    loop PersistentVector.empty l
type TypecheckerStream = EditorStream<ParserRes Promise, InferResult Stream>
let typechecker package_id module_id (path : string) top_env =
    let rec run old_results env i (bss : TopOffsetStatement list list) =
        match bss with
        | b :: bs ->
            match PersistentVector.tryNth i old_results with
            | Some (b', _, env as s) when b = b' -> Cons(s,Promise(run old_results env (i+1) bs))
            | _ ->
                let rec loop old_results env i = function
                    | b :: bs ->
                        let x = Infer.infer package_id module_id env (bundle_statements b)
                        let adds = match x.top_env_additions with AOpen x | AInclude x -> x
                        let _,_,env as s = b,x,Infer.union adds env
                        Cons(s,promise_thunk (fun () -> loop old_results env (i+1) bs))
                    | [] -> Nil
                loop old_results env i bss
        | [] -> Nil
    let rec loop r =
        {new TypecheckerStream with
            member _.Run(res) =
                let r = r()
                let r' =
                    r >>=* fun old_results ->
                    top_env >>= fun top_env ->
                    res >>- fun res ->
                    run (cons_fulfilled old_results) top_env 0 res.bundles
                let a = Stream.mapFun (fun (_,x,_) -> x) r'
                a, loop (fun () -> if Promise.Now.isFulfilled r' then r' else r)
        }
    loop (fun () -> Stream.nil)
10:45am. How complicated. Let me make a non-diffing version of this.
let x = Infer.infer package_id module_id env (bundle_statements b)
let adds = match x.top_env_additions with AOpen x | AInclude x -> x
let _,_,env as s = b,x,Infer.union adds env
Cons(s,promise_thunk (fun () -> loop old_results env (i+1) bs))
These 4 lines are most of the actual functionality.
let inline wdiff_fold f s x =
    let s = s()
    let p = promise_thunk_with (f s) x
    p, fun () -> if Promise.Now.isFulfilled p then Promise.Now.get p else s

let inline wdiff_mapFold f s x =
    let s = s()
    let p = promise_thunk_with (f s) x
    p >>-* fst, fun () -> if Promise.Now.isFulfilled p then snd (Promise.Now.get p) else s
Ah, this needs to be like so.
11:05am.
let typechecker package_id module_id top_env l : InferResult Stream =
    let rec loop env = function
        | l :: ls ->
            let x = Infer.infer package_id module_id env l
            let adds = match x.top_env_additions with AOpen x | AInclude x -> x
            let env = Infer.union adds env
            Cons(x,promise_thunk_with (loop env) ls)
        | [] ->
            Nil
    promise_thunk_with (loop top_env) l
This is how easy it would be without all the diff nonsense. Now let me do that as well.
11:40am.
type TypecheckerState = {
    package_id : int
    module_id : int
    top_env : TopEnv Promise
    results : (Bundle * InferResult * TopEnv) Stream
    }

let wdiff_typechecker (state : TypecheckerState) l =
    let rec loop env = function
        | l :: ls ->
            let x = Infer.infer state.package_id state.module_id env l
            let adds = match x.top_env_additions with AOpen x | AInclude x -> x
            let env = Infer.union adds env
            Cons((l,x,env),promise_thunk_with (loop env) ls)
        | [] ->
            Nil
    let rec diff env = function
        | Cons((b,_,env as x),next), b' :: bs when b = b' ->
            if Promise.Now.isFulfilled next then Cons(x,promise_thunk_with (diff env) (Promise.Now.get next,bs))
            else Cons(x,promise_thunk_with (loop env) bs)
        | _,bs -> loop env bs
    let results =
        state.top_env >>=* fun top_env ->
        state.results >>= fun r ->
        l >>- fun l -> diff top_env (r,l)
    Stream.mapFun (fun (_,x,_) -> x) results, {state with results = results}
Wow, just wow. This is it. This is the form I've been looking for.
This is much better than the old version. It is just so much clearer. The refactor has been worth it.
I love this. Clarity is compositional. If you do not understand one of the steps, you won't understand the follow ups either.
11:50am.
type ModuleId = int
type DiffableFileHierarchyT<'a,'b> =
    | File of path: string * name: string option * 'a
    | Directory of name: string * DiffableFileHierarchyT<'a,'b> list * 'b
type DiffableFileHierarchy =
    DiffableFileHierarchyT<
        (InferResult Stream * (ModuleId * TopEnv Promise)) option * ParserRes Promise * TypecheckerStream option,
        (ModuleId * TopEnv Promise) option
        >
type MultiFileStream = EditorStream<DiffableFileHierarchy list, Map<string,InferResult Stream> * TopEnv Promise>
Oh boy, now comes this.
12:40pm. I am grinding away at it.
// Rather than just throwing away the old results, diff returns the new tree with as much useful info from the old tree as is possible.
let diff_order_changed old new' =
    let mutable same_files = true
    let mutable same_order = true
    let rec elem (o,n) =
        match o,n with
        // In `n`, `meta` and `tc` fields are None.
        | File(path,name,(_,p,tc)) & o, File(path',name',(_,p',_)) when path = path' && name = name' ->
            if same_files then
                if Object.ReferenceEquals(p,p') then o
                else same_files <- false; File(path,name,(None,p',tc))
            else File(path,name,(None,p',None))
        | Directory(name,l,o), Directory(name',l',o') when name = name' -> Directory(name,list (l,l'),if same_files then o else o')
        | _, n -> same_order <- false; n
    and list = function
        | o :: o', n :: n' -> elem (o,n) :: (if same_order then list (o', n') else n')
        | [], [] -> []
        | _, n -> same_order <- false; n
    list (old,new')

let inline multi_file_run on_unchanged_file on_changed_file top_env_empty create_stream post_process_result union in_module package_id top_env files =
    let rec changed (module_id,top_env as i) x =
        match x with
        | File(path,_,(Some (r,o),_,_)) ->
            on_unchanged_file path r
            x, o
        | File(path,name,(None,res,tc)) ->
            let tc : EditorStream<_,_> = match tc with Some tc -> tc | None -> create_stream package_id module_id path top_env
            let r,tc = tc.Run res
            on_changed_file path r
            let top_env_additions =
                let adds = post_process_result r
                match name with
                | Some name -> adds >>-* in_module name
                | None -> adds
            let o = module_id+1, top_env_additions
            File(path,name,(Some (r,o),res,Some tc)),o
        | Directory(name,l,Some o) -> Directory(name,l,Some o), o
        | Directory(name,l,None) ->
            let l,(module_id,top_env_adds) = changed_list i l
            let o = module_id, top_env_adds >>-* in_module name
            Directory(name,l,Some o),o
    and changed_list (module_id,top_env) l =
        let o = module_id, Promise.Now.withValue(top_env_empty)
        let l,(_,o) =
            List.mapFold (fun (top_env, (module_id, top_env_adds as o)) x ->
                let i = module_id, top_env
                let x,(module_id,top_env_adds') = changed i x
                let union a b = a >>=* fun a -> b >>- fun b -> union a b
                let top_env = union top_env_adds' top_env
                let o = module_id, union top_env_adds' top_env_adds
                x,(top_env,o)
                ) (top_env,o) l
        l,o
    let i = 0, top_env
    let l,(_,top_env_adds) = changed_list i files
    top_env_adds, l
Figuring out how to simplify all of this is not an easy thing. It would be a lot easier if not for the directories.
I am getting some ideas for an alternative design."
HEEHEE WOW LOOK AT ME I'M A FULPSTATION CODER
OH LOOK AT MY BEAUTIFUL "MODULAR" CODE THATS ONLY MODULAR WHEN THE FUCKING HEADCODER ENFORCES IT! MY CODE IS SO FUCKING ABSOLUTELY MODULAR FROM OVERWRITING AN ESSENTIAL TG FUNCTION, JUST BECAUSE IT DOESN'T EDIT A TG FILE THAT WAY! YEP! NO COMPLAINTS FROM THE HEADCODER! NO COMPLAINTS FROM A FELLOW CODER! I WAS JUST CODING AND OVERWROTE AN ENTIRE FUNCTION FOR MY STUPID FUCKING FEATURE! EDITING TG FILES? SOUNDS LIKE YOU'RE JUST A FUCKING SHITCODER. PARDON MY INNABILITY TO FUCKING TALK PERRPEDRLY AS I FUCKING BREAK THE ENTIRE GAME BY A SINGLE OVERWRITE, RESULTING IN THOUSANDS (AND YES, AGGONNIZZINGG FUCKING) RUNTIMES, CRASHING THE SERVER! Honestly fulp coders are so fucking stupid. nobody who actually codes tries to modularize it. Its an inbetween semi-modular/non-modular at most. And when people ARE ACTUALLY having working features, they do it by editing TG files. half of the fulp coders are okay while the other half are completely fucking real life brain damaged. Honestly, why the FUCK would you MODULARIZE something that WILL BREAK no matter what? WHAT FUCKING REASON do you have to tell me to modularize my code for something that NOBODY other than your stupid, fucking horrible programming self had a problem with. I hate Baycode, and I think that most of t heir coders aren't the best (same for TG) but you know what they do right? They let people code without restraints. As long as something is a good feature, as long as maintainers aren't complaining, they let it be. OH, AND TO FUCKING TOP IT ALL OFF, AND THIS, THIS IS FUCKING UNFORGIVABLE, ON THE REPOSITORY'S PAGE, THE FUCKING LINK THAT IS FOR "DISCORD" DOESN'T EVEN REDIRECT TO A FULP CODERBUS. WHAT DO YOU WANT ME TO DO? JOIN YOUR SHITTY FUCKING DISCORD SERVER? Fuck fulp coders. fucking shitty ass coders.
gdb: set current thread in sparc_{fetch,collect}_inferior_registers (PR gdb/27147)
PR 27147 shows that on sparc64, GDB is unable to properly unwind:
Expected result (from GDB 9.2):
#0 0x0000000000108de4 in puts ()
#1 0x0000000000100950 in hello () at gdb-test.c:4
#2 0x0000000000100968 in main () at gdb-test.c:8
Actual result (from GDB latest git):
#0 0x0000000000108de4 in puts ()
#1 0x0000000000100950 in hello () at gdb-test.c:4
Backtrace stopped: previous frame inner to this frame (corrupt stack?)
The first failing commit is 5b6d1e4fa4fc ("Multi-target support"). The cause of the change in behavior (thanks to Andrew Burgess for finding this) is:
- inferior_ptid is no longer set on entry of target_ops::wait, whereas it was set to something valid previously
- deep down in linux_nat_target::wait (see stack trace below), we fetch the registers of the event thread
- on sparc64, fetching registers involves reading memory (in sparc_supply_rwindow, see stack trace below)
- reading memory (target_ops::xfer_partial) relies on inferior_ptid being set to the thread from which we want to read memory
This is where things go wrong:
#0 linux_nat_target::xfer_partial (this=0x10000fa2c40 <the_sparc64_linux_nat_target>, object=TARGET_OBJECT_MEMORY, annex=0x0, readbuf=0x7feffe3b000 "", writebuf=0x0, offset=8791798050744, len=8, xfered_len=0x7feffe3ae88) at /home/simark/src/binutils-gdb/gdb/linux-nat.c:3697
#1 0x00000100007f5b10 in raw_memory_xfer_partial (ops=0x10000fa2c40 <the_sparc64_linux_nat_target>, readbuf=0x7feffe3b000 "", writebuf=0x0, memaddr=8791798050744, len=8, xfered_len=0x7feffe3ae88) at /home/simark/src/binutils-gdb/gdb/target.c:912
#2 0x00000100007f60e8 in memory_xfer_partial_1 (ops=0x10000fa2c40 <the_sparc64_linux_nat_target>, object=TARGET_OBJECT_MEMORY, readbuf=0x7feffe3b000 "", writebuf=0x0, memaddr=8791798050744, len=8, xfered_len=0x7feffe3ae88) at /home/simark/src/binutils-gdb/gdb/target.c:1043
#3 0x00000100007f61b4 in memory_xfer_partial (ops=0x10000fa2c40 <the_sparc64_linux_nat_target>, object=TARGET_OBJECT_MEMORY, readbuf=0x7feffe3b000 "", writebuf=0x0, memaddr=8791798050744, len=8, xfered_len=0x7feffe3ae88) at /home/simark/src/binutils-gdb/gdb/target.c:1072
#4 0x00000100007f6538 in target_xfer_partial (ops=0x10000fa2c40 <the_sparc64_linux_nat_target>, object=TARGET_OBJECT_MEMORY, annex=0x0, readbuf=0x7feffe3b000 "", writebuf=0x0, offset=8791798050744, len=8, xfered_len=0x7feffe3ae88) at /home/simark/src/binutils-gdb/gdb/target.c:1129
#5 0x00000100007f7094 in target_read_partial (ops=0x10000fa2c40 <the_sparc64_linux_nat_target>, object=TARGET_OBJECT_MEMORY, annex=0x0, buf=0x7feffe3b000 "", offset=8791798050744, len=8, xfered_len=0x7feffe3ae88) at /home/simark/src/binutils-gdb/gdb/target.c:1375
#6 0x00000100007f721c in target_read (ops=0x10000fa2c40 <the_sparc64_linux_nat_target>, object=TARGET_OBJECT_MEMORY, annex=0x0, buf=0x7feffe3b000 "", offset=8791798050744, len=8) at /home/simark/src/binutils-gdb/gdb/target.c:1415
#7 0x00000100007f69d4 in target_read_memory (memaddr=8791798050744, myaddr=0x7feffe3b000 "", len=8) at /home/simark/src/binutils-gdb/gdb/target.c:1218
#8 0x0000010000758520 in sparc_supply_rwindow (regcache=0x10000fea4f0, sp=8791798050736, regnum=-1) at /home/simark/src/binutils-gdb/gdb/sparc-tdep.c:1960
#9 0x000001000076208c in sparc64_supply_gregset (gregmap=0x10000be3190 <sparc64_linux_ptrace_gregmap>, regcache=0x10000fea4f0, regnum=-1, gregs=0x7feffe3b230) at /home/simark/src/binutils-gdb/gdb/sparc64-tdep.c:1974
#10 0x0000010000751b64 in sparc_fetch_inferior_registers (regcache=0x10000fea4f0, regnum=80) at /home/simark/src/binutils-gdb/gdb/sparc-nat.c:170
#11 0x0000010000759d68 in sparc64_linux_nat_target::fetch_registers (this=0x10000fa2c40 <the_sparc64_linux_nat_target>, regcache=0x10000fea4f0, regnum=80) at /home/simark/src/binutils-gdb/gdb/sparc64-linux-nat.c:38
#12 0x00000100008146ec in target_fetch_registers (regcache=0x10000fea4f0, regno=80) at /home/simark/src/binutils-gdb/gdb/target.c:3287
#13 0x00000100006a8c5c in regcache::raw_update (this=0x10000fea4f0, regnum=80) at /home/simark/src/binutils-gdb/gdb/regcache.c:584
#14 0x00000100006a8d94 in readable_regcache::raw_read (this=0x10000fea4f0, regnum=80, buf=0x7feffe3b7c0 "") at /home/simark/src/binutils-gdb/gdb/regcache.c:598
#15 0x00000100006a93b8 in readable_regcache::cooked_read (this=0x10000fea4f0, regnum=80, buf=0x7feffe3b7c0 "") at /home/simark/src/binutils-gdb/gdb/regcache.c:690
#16 0x00000100006b288c in readable_regcache::cooked_read<unsigned long, void> (this=0x10000fea4f0, regnum=80, val=0x7feffe3b948) at /home/simark/src/binutils-gdb/gdb/regcache.c:777
#17 0x00000100006a9b44 in regcache_cooked_read_unsigned (regcache=0x10000fea4f0, regnum=80, val=0x7feffe3b948) at /home/simark/src/binutils-gdb/gdb/regcache.c:791
#18 0x00000100006abf3c in regcache_read_pc (regcache=0x10000fea4f0) at /home/simark/src/binutils-gdb/gdb/regcache.c:1295
#19 0x0000010000507920 in save_stop_reason (lp=0x10000fc5b10) at /home/simark/src/binutils-gdb/gdb/linux-nat.c:2612
#20 0x00000100005095a4 in linux_nat_filter_event (lwpid=520983, status=1407) at /home/simark/src/binutils-gdb/gdb/linux-nat.c:3050
#21 0x0000010000509f9c in linux_nat_wait_1 (ptid=..., ourstatus=0x7feffe3c8f0, target_options=...) at /home/simark/src/binutils-gdb/gdb/linux-nat.c:3194
#22 0x000001000050b1d0 in linux_nat_target::wait (this=0x10000fa2c40 <the_sparc64_linux_nat_target>, ptid=..., ourstatus=0x7feffe3c8f0, target_options=...) at /home/simark/src/binutils-gdb/gdb/linux-nat.c:3432
#23 0x00000100007f8ac0 in target_wait (ptid=..., status=0x7feffe3c8f0, options=...) at /home/simark/src/binutils-gdb/gdb/target.c:2000
#24 0x00000100004ac17c in do_target_wait_1 (inf=0x1000116d280, ptid=..., status=0x7feffe3c8f0, options=...) at /home/simark/src/binutils-gdb/gdb/infrun.c:3464
#25 0x00000100004ac3b8 in operator() (__closure=0x7feffe3c678, inf=0x1000116d280) at /home/simark/src/binutils-gdb/gdb/infrun.c:3527
#26 0x00000100004ac7cc in do_target_wait (wait_ptid=..., ecs=0x7feffe3c8c8, options=...) at /home/simark/src/binutils-gdb/gdb/infrun.c:3540
#27 0x00000100004ad8c4 in fetch_inferior_event () at /home/simark/src/binutils-gdb/gdb/infrun.c:3880
#28 0x0000010000485568 in inferior_event_handler (event_type=INF_REG_EVENT) at /home/simark/src/binutils-gdb/gdb/inf-loop.c:42
#29 0x000001000050d394 in handle_target_event (error=0, client_data=0x0) at /home/simark/src/binutils-gdb/gdb/linux-nat.c:4060
#30 0x0000010000ab5c8c in handle_file_event (file_ptr=0x10001207270, ready_mask=1) at /home/simark/src/binutils-gdb/gdbsupport/event-loop.cc:575
#31 0x0000010000ab6334 in gdb_wait_for_event (block=0) at /home/simark/src/binutils-gdb/gdbsupport/event-loop.cc:701
#32 0x0000010000ab487c in gdb_do_one_event () at /home/simark/src/binutils-gdb/gdbsupport/event-loop.cc:212
#33 0x0000010000542668 in start_event_loop () at /home/simark/src/binutils-gdb/gdb/main.c:348
#34 0x000001000054287c in captured_command_loop () at /home/simark/src/binutils-gdb/gdb/main.c:408
#35 0x0000010000544e84 in captured_main (data=0x7feffe3d188) at /home/simark/src/binutils-gdb/gdb/main.c:1242
#36 0x0000010000544f2c in gdb_main (args=0x7feffe3d188) at /home/simark/src/binutils-gdb/gdb/main.c:1257
#37 0x00000100000c1f14 in main (argc=4, argv=0x7feffe3d548) at /home/simark/src/binutils-gdb/gdb/gdb.c:32
There is a target_read_memory call in sparc_supply_rwindow, whose return value is not checked. That call fails, because inferior_ptid does not contain a valid ptid, so uninitialized buffer contents are used. Ultimately this results in a corrupt stop_pc.
target_ops::fetch_registers can be (and should remain, in my opinion) independent of inferior_ptid, because the ptid of the thread from which to fetch registers can be obtained from the regcache. In other words, implementations of target_ops::fetch_registers should not rely on inferior_ptid having a sensible value on entry.
The sparc64_linux_nat_target::fetch_registers case is special, because it calls a target method that is dependent on the inferior_ptid value (target_read_inferior, and ultimately target_ops::xfer_partial). So I would say it's the responsibility of sparc64_linux_nat_target::fetch_registers to set up inferior_ptid correctly prior to calling target_read_inferior.
This patch makes sparc64_linux_nat_target::fetch_registers (and store_registers, since it works the same) temporarily set inferior_ptid. If we ever make target_ops::xfer_partial independent of inferior_ptid, setting inferior_ptid won't be necessary, we'll simply pass down the ptid as a parameter in some way.
I chose to set/restore inferior_ptid in sparc_fetch_inferior_registers, because I am not convinced that doing so in an inner location (in sparc_supply_rwindow for instance) would always be correct. We have access to the ptid in sparc_supply_rwindow (from the regcache), so we could set inferior_ptid there. However, I don't want to just set inferior_ptid, as that would leave it desync'ed from current_thread () and current_inferior (). It's preferable to use switch_to_thread instead, as that switches all the global "current" stuff in a coherent way. But doing so requires a thread_info *, and getting a thread_info * from a ptid requires a process_stratum_target *. We could use current_inferior ()->process_target () in sparc_supply_rwindow for this (using target_read_memory uses the current inferior's target stack anyway). However, sparc_supply_rwindow is also used in the context of BSD uthreads, where a thread stratum target defines threads. I presume the ptid in the regcache would be the ptid of the uthread, defined by the thread stratum target (bsd_uthread_target). Using current_inferior ()->process_target () would look up a ptid defined by the thread stratum target using the process stratum target. I don't think it would give good results. So I prefer playing it safe and looking up the thread earlier, in sparc_fetch_inferior_registers.
I added some assertions (in sparc_supply_rwindow and others) to verify that the regcache's ptid matches inferior_ptid. That verifies that the caller has properly set the correct global context. This would have caught (though a failed assertion) the current problem.
gdb/ChangeLog:

PR gdb/27147
* sparc-nat.h (sparc_fetch_inferior_registers,
sparc_store_inferior_registers): Add process_stratum_target
parameter. Update callers.
* sparc-nat.c (sparc_fetch_inferior_registers): Add
process_stratum_target parameter. Switch current thread before
calling sparc_supply_gregset / sparc_collect_rwindow.
(sparc_store_inferior_registers): Likewise.
* sparc-obsd-tdep.c (sparc32obsd_supply_uthread): Add assertion.
(sparc32obsd_collect_uthread): Likewise.
* sparc-tdep.c (sparc_supply_rwindow, sparc_collect_rwindow):
Add assertion.
* sparc64-obsd-tdep.c (sparc64obsd_collect_uthread,
sparc64obsd_supply_uthread): Add assertion.

Change-Id: I16c658cd70896cea604516714f7e2428fbaf4301
[discussion point] Allow setting rpc-load response header
Hello,

I have a use-case where I'd like the application to send load reports back to the caller. This is to be used by a load balancer to select the most appropriate peer.

My current plan was to simply use a middleware - this covers my use case. However, I am unable to set the rpc-load header, which seems preferable - this data clearly is rpc related.

This change proposes to simply allow-list the rpc-load header (a sketch of the intended behavior follows the alternatives below). All applications would be able to set it indiscriminately. This opens us to potential mis-use, but I think this is fine - we're all adults here, and someone would need to do it on purpose.
Some alternatives that came to mind:

- Use a different header, like lb-load (load-balancing-load). It's possible, but it kinda breaks the niceness of rpc headers.
- Rather than creating an allow list per header, create an allow list per middleware - we'd then know exactly who is setting this. This seems tricky though.
- Create a new SetRPCLoad method, possibly similar to #2027. This is quite a bit more verbose, and we'd still need to allow anyone to set it (?).
- Move the whole rpc-load middleware into yarpc. I'd rather not do this (yet?) - the middleware is/will change, and will require bumping/updating. The load balancer work will also be very specific, so I'm not sure if that belongs in yarpc (yet?).
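To make the proposal concrete: yarpc itself is a Go library, but here is a short language-agnostic sketch (written in TypeScript, with hypothetical names) of the allow-list behavior this change proposes:

    // Reserved prefix and the single allow-listed exception proposed here.
    const RESERVED_PREFIX = 'rpc-';
    const ALLOWED_RESERVED_HEADERS = new Set(['rpc-load']);

    // Validate an application-supplied response header: reserved rpc-*
    // headers are rejected unless explicitly allow-listed.
    function validateApplicationHeader(name: string): void {
      const key = name.toLowerCase();
      if (key.startsWith(RESERVED_PREFIX) && !ALLOWED_RESERVED_HEADERS.has(key)) {
        throw new Error(`cannot set reserved header "${name}"`);
      }
    }

    // validateApplicationHeader('rpc-load')   -> ok
    // validateApplicationHeader('rpc-status') -> throws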
Please share your thoughts :)
Made ANOTHER change to BaseRecord... yay..
with all this extra time, when the world and I should not have been trapped in this endless nightmare of purposeless wandering and unfulfillment, where people who have never ever lived pretend they are replicating what real life is like at the cost of everyone else's misery and pain, and where they roll back the world like some continually derailed trainset being exploded by some psychopath version of Gomez Addams.
Adds useful comments to the vimscript for netCDF
I can barely remember how vimscript works at any given time, so to save myself a lot of trouble looking at this script, I've added some useful comments to make sense out of what magic is happening.
more black magic fucking bullshit, maybe it will decide to work now? who fucking knows
fuck you all hypocrite cult bullshit halfwits, called PP HQ
Fixed Bug Where Fox TF Not Triggering + Alex Clamp
-Changed Alex Clamp and relationshipsclamp so Alex will no longer be locked out of max love and dominance, like other non-important NPCs.
-Fixed extensive typo and missing code in ejaculation.twee that was causing Fox Transformation to not trigger when fucking or being fucked by foxes. (same for cats and cattle)
fifth time or so I had to do this same fucking shit just because these idiot assholes want to walk around in very large circles and drag everyone else along, who they systematically induce amnesia in based on the traumatic and miserable nature of the fact that they are all baby selling baby rapers that never loved anyone and were never truly loved by anyone else.
when you know the girl you love is already in love with another guy haha ^^! poor you :P
Changed some stuff. Go figure it out yourself go fuck yourself
You know Tori needs to pay for consistently just trying to be a little bitch that sabotaged me at points trying to frustrate progress, just because, like all of these weird bizarre hereditary sexual predators, she was only ever formed towards one thing, which is tricking, conning, and in general being not very nice to people. We're sorry daddy made you a whore; why is this the normal guy's fault? I wonder if she enjoyed watching that dog and her boss.
Upgraded to Grafana 7. Upgraded to Influx 1.8.
Merged queries in most graphs. Added Usage by month: ab5g https://forum.netgate.com/post/954800. Gateway loss: Condoamanti VictorRobellini/pfSense-Dashboard#22
Changed lots of graphs around. I tinkered with other types of graphs but I stuck with the standard graph format so that I could view trends and spikes over a time period. I added interface information to each interface section - friendlyname - I would love to add it to the header, but I don't know how.
More changes than I can remember