2,021,557 events, 1,003,706 push events, 1,605,691 commit messages, 123,519,275 characters
Add __main__.py
to allow for a terminal interface, and add property loading
With the experience of some of my other projects at work, doing the terminal interface was familiar and not difficult (thankfully). This gives me a framework to expand this as time goes on. I also added functionality to import configurations from a yaml configuration file. I debated between yaml and json but since yaml doesn't require quite as much formatting, I decided to use yaml. I shifted some of the hard coded values to the config file and I will try to get that tested when I next have time to work on this (hopefully tomorrow, trying to put in ~an hour a day on this during the week).
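For reference, a minimal sketch of the kind of yaml config loading described above (the file name, keys, and defaults are placeholders, not the project's actual schema):

```python
# Hypothetical sketch: load settings from a yaml file and overlay them on
# hard-coded defaults. Keys and defaults are illustrative only.
from dataclasses import dataclass

import yaml  # PyYAML


@dataclass
class Settings:
    data_dir: str = "data"
    batch_size: int = 100


def load_settings(path: str = "config.yaml") -> Settings:
    with open(path, "r", encoding="utf-8") as handle:
        raw = yaml.safe_load(handle) or {}
    return Settings(
        data_dir=raw.get("data_dir", Settings.data_dir),
        batch_size=raw.get("batch_size", Settings.batch_size),
    )
```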
I also added another unit/integration test, this time to test the loading of config yaml files. Pretty simple stuff but it will help to have that later on. Doing my best to conform to the same stringent work requirements surrounding python coding, both for the best practices that they are and so that others who might see this can understand it better. With the terminal interface now open for use, and config files ready to use, I plan on working on the article parsing tomorrow and digging into the format that I was given from a professor I'm working with. They are using jsonl, which I'm kind of familiar with; part of me wishes it was just json though. Not sure how best to handle the variety of formats people might use to provide articles outside of just me, I'll probably look to that professor for some thoughts on that for ease of use. Ideally I would avoid having an importer per file type (e.g. .txt and .json and .jsonl, so on and so forth) but we'll see what happens as I want to make this tool available for use by the group.
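Since the articles arrive as jsonl, here is a minimal sketch of reading them with the standard library (one JSON object per line); the "title" field at the end is an assumption for illustration:

```python
import json
from typing import Iterator


def read_articles(path: str) -> Iterator[dict]:
    """Yield one article dict per line of a .jsonl file, skipping blank lines."""
    with open(path, "r", encoding="utf-8") as handle:
        for line in handle:
            line = line.strip()
            if line:
                yield json.loads(line)


# e.g. print the (assumed) title field of each article:
# for article in read_articles("articles.jsonl"):
#     print(article.get("title"))
```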
Tests passing, coverage raised but not 100% just yet. Will need to do some mocking to technically reach that peak. Ideally once this system is set up I can add some larger integration and functional tests to ensure it works properly at a larger scale.
pls no bobby table BUT user filtering in ledger!
also it spits out if there are no results found
STORY TIME
i don't like ORMs
it's totally due to the job i just finished up at for a bit, i have taken on the opinions of those that wrote the services that i still feel like i only lightly had anything to do with .-.
i just like the idea that I AM THE ONE in control of the SQL that gets generated
its a good feeling!
im still down for helper functions and such to do some of the heavy lifting and keeping things KISS
i guess you could say its kinda like i want the jquery of SQL tools
i don't wanna touch the raw dom (just kinda populating strings from scratch w/statements) but i also don't want the react of the sql world too (which is funny because i love react)
idk so basically anyways totally thats WHY i wrote this little weird conditional maker
it makes statements, hands them to the thing that SHOULD prepare them all nice and such
and if it doesn't
hello bobby!
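For illustration, a toy Python/sqlite3 version of that kind of conditional maker (all names made up): the helper only assembles placeholders and a value list, and the driver does the actual preparing, which is what keeps Bobby out.

```python
import sqlite3


def make_conditions(filters):
    """Build "col = ? AND col = ?" plus the matching values.
    Column names come from trusted code; user input only ever lands in values."""
    clauses = [f"{column} = ?" for column in filters]
    return " AND ".join(clauses), list(filters.values())


conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ledger (id INTEGER, user TEXT, amount REAL)")
conn.execute("INSERT INTO ledger VALUES (1, 'alice', 9.99)")

where, values = make_conditions({"user": "alice"})
rows = conn.execute(f"SELECT * FROM ledger WHERE {where}", values).fetchall()
print(rows if rows else "no results found")
```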
storage: employ transactional idempotency to refresh mixed-success batches
This change builds on the introduction of idempotency properties introduced into the MVCC layer in #33001. It exploits this property to remove a previous restriction that could prevent transactions from refreshing and result in transaction retries. The restriction was that batches which had experienced some success while writing could not refresh even if another part of the batch required a refresh. This was because it was unclear which parts of the batch had succeeded in performing writes and which parts of the batch had not. Without this knowledge, it was unsafe to re-issue the batch because that could result in duplicating writes (e.g. performing an increment twice).
A lot has changed since then. We now have proper sequence numbers on requests, we no longer send BeginTransaction requests (which threw errors on replays), and we now have an MVCC layer that is idempotent for writes within a transaction. With this foundation in place, we can now safely re-issue any write within a transaction that we're unsure about. As long as writes remain well sequenced (writes to the same key aren't reordered), everything should behave as expected.
The best way to see that the change works is through the test cases in TestTxnCoordSenderRetries that now succeed. Without #33001, those all fail.
Because we can now refresh mixed-success batches, this commit allows us to remove the MixedSuccessError type entirely. No longer will it haunt our sender interface with its ambiguity and awkward error wrapping. No longer will it stand in the way of read refreshes with its ignorant stubbornness. The days of its tyranny are over. The war is won.
Release note (performance improvement): Transactions are able to refresh their read timestamp even after the partial success of a batch.
Included a bunch of stuff + plans to redo code structure
I made a few minor changes throughout the code (in a bid to get it running). I realized, however, that my particle class design is stupid. No class should have that many friends. I thought about it and realized that most of particle's friends are actually body class methods that were made before I came up with the body class. When I wrote the body class, I modified particle's friends to work using bodies rather than arrays of particles, but that was really just a hack of a fix. The real fix is to integrate those friends into the body class. I marked the functions that I believe belong in the body class with "Body method" in particle.h. I will be making particle a friend of body. Expect that change in the next commit (and.... hopefully... working code?).
This change will probably prompt me to eliminate a lot of the particle files (most of the particle helper methods, for example). In order to conserve space, I will be distributing the various body methods over several files (sorta like I'm doing right now with particle).
Add files via upload
WHATS NEW:
- fixed player movement so you can't press both left and right at the same time and have weird stuff happening
- changed the gdscript attached to the player object to be PlayerScript, not that KinematicBody2D crap
- made some other scripts in preparation for color assigning, and "getting" color from the player to detect if it's the same color or not to trigger the death sequence
TODO:
- start assigning colors to the player ball soon, including ways to switch
- figure out how the hell I'm supposed to do getters and setters for texture colors, since gdscript isn't easy like Java where you can just do "this.getColor()" and have it return a value; I basically need to figure out how to translate Java's "this" into the GDScript equivalent, which IS A HUGE PAIN TO DO IT SEEMS.
Update README.md
I'm a web developer with a bachelor's degree in computer science and self-taught experience. I write blogs in my free time. I love to learn new technologies and share them with others. The main aim of this website is to provide PHP, jQuery, MySQL, and other web development tools for web development in Nepal's tourism sector.
Created Text For URL [www.theguardian.com/lifeandstyle/2020/jan/07/i-am-struggling-to-stay-sexually-aroused-with-my-younger-boyfriend]
"10am. I am up. Yesterday's vacation did me a lot of good. I know I said I would do it for an entire week, but I already feel like resuming.
Yes, it will be impossible to work for the same goals as before. Yes, classical logic is my enemy and is worthy of contempt. Yes, the constructive route will have to be traveled - without some posers trying to stab it in the back.
No, the time for that last part is not now. As much animosity as I've gathered towards the mainstream lines of thinking, my real enemy is the low information value of ML experiments.
My programming style has been to go for stabs, but I need to change policies and go for explosives from here on out. Fundamentally, anything constructive math will give me, I can also get using inspired applications of randomized testing.
10:05am. So for the first time in 5 years, not only will I change specific goals, but the general direction I am going in.
It is time to get good.
I do not know whether there is cause to make Spiral, but shitty languages prospering do a lot of damage long term, and so I should do it as a service to both myself and humanity.
Once I get Spiral up to par, I will be able to tie up the low style and get the full power of what is possible for humans in the domain of programming.
Maybe going for Spiral is not the right choice, but going for Python or any other mainstream language is not the right choice either. So I need to do this. Hopefully it will give me some fame points that in the worst case I will be able to cash in somehow. If I am lucky and the time turns out to be on my side I won't need to, but it never hurts to plan for the worst.
10:10am. Right now it feels I can resume work on the parser lightly. I am going to aim for a few hours a day while I am still in my vacation period and then escalate from there.
In fact, putting a few hours a day might be the ideal pace. It got me that national championship win in the 00s, so it is not like it is an outright mistake.
10:10am. Let me do a little now. I want to see what I can do."
Jump Start MySQL
Preface: This book is an introduction to the basic concepts of working with a Relational Database Management System (RDBMS), specifically the popular, open source RDBMS MySQL. Like the other books in the Jump Start series, it aims to give you a head start in your understanding of the chosen technology. You'll learn the basics quickly, in a friendly, (hopefully) pain-free way, and have a solid foundation to continue on in your learning.
Who should read this book: This book is aimed at those who are interested in working with data and want to learn how to use MySQL. To get the most out of some parts of this book, you should have some previous programming experience, although no specific language is required.
simple camera that is controlled by mouse movement. Fuck you, pygame.
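A rough sketch of the idea (not the project's actual code): keep a camera offset vector that drifts toward wherever the mouse sits relative to the screen centre, and subtract it from everything you draw.

```python
# Toy mouse-driven camera; values and speeds are made up for illustration.
import pygame

pygame.init()
screen = pygame.display.set_mode((640, 480))
clock = pygame.time.Clock()
camera = pygame.math.Vector2(0, 0)
box = pygame.Rect(300, 200, 40, 40)  # a single "world" object

running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False

    # Nudge the camera toward the mouse's offset from the screen centre.
    mouse = pygame.math.Vector2(pygame.mouse.get_pos())
    centre = pygame.math.Vector2(screen.get_size()) / 2
    camera += (mouse - centre) * 0.02

    screen.fill((30, 30, 30))
    pygame.draw.rect(screen, (200, 80, 80), box.move(-camera.x, -camera.y))
    pygame.display.flip()
    clock.tick(60)

pygame.quit()
```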
Updated the README
Now that I've completed 2019's AoC, it was time to write up my little experience. I really loved that I pushed through and learned more about Dijkstra's and the BFS techniques. Well worth the pain. :)
Culture Switch Limitations. Added limitations on switching to some cultures (affects both culture switch decisions and childhood events):
- To switch to Aqir culture group, you must be Qiraji or Nerubian.
- To switch to Demonic and Satyr culture group, you must have Demon trait or follow Sargerite religion.
- To switch to Nazja and Shath'yar culture group, you must have Void Being trait or follow Shath'yar religion.
- To switch to Undead culture group, you must have Undead trait or follow Death God religion.
[nightcolor] Expose some properties to d-bus
Summary: Currently, in order to retrieve the current screen color temperature applied to all screens, as well as other attributes of the night color manager, one has to call nightColorInfo() periodically. This goes against well established patterns in the d-bus world. It is recommended to expose a bunch of d-bus properties rather than have a method that returns all relevant properties stored in a JSON object.
The ugliest thing about this patch is that a lot of code is duplicated to emit the PropertiesChanged signal. Unfortunately, QtDBus doesn't take care of this and we are left with only two options - either do weird things with QMetaObject or manually emit the signal. I have picked the second option since it's more comprehensible and less magic is going on, but I have to admit that the chosen approach is ugly.
I hope that "Qt 6 will fix it."
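To sketch what this buys consumers (a hedged dbus-python example; the service name, object path, interface, and property below are assumptions, not verified against the real KWin interface): clients can read individual properties through org.freedesktop.DBus.Properties instead of polling a JSON-returning method.

```python
# Illustrative consumer-side sketch only; the bus names and the property name
# ("currentTemperature") are placeholders for this example.
import dbus

bus = dbus.SessionBus()
obj = bus.get_object("org.kde.KWin", "/ColorCorrect")
props = dbus.Interface(obj, dbus_interface="org.freedesktop.DBus.Properties")

# Read one property on demand instead of calling nightColorInfo() periodically.
temperature = props.Get("org.kde.kwin.ColorCorrect", "currentTemperature")
print(int(temperature))
```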
CCBUG: 400418
Reviewers: #kwin, davidedmundson
Reviewed By: #kwin, davidedmundson
Subscribers: kwin
Tags: #kwin
Differential Revision: https://phabricator.kde.org/D25946
[MERGE] partner_autocomplete_address_extended: remove broken module and ensure base_address_extended effectively works
PURPOSE
Remove partner_autocomplete_address_extended and ensure base_address_extended effectively works.
SPECIFICATIONS
Commits a6e1eb9 and 8a1815b added, among a lot of other things, a bridge module between partner_autocomplete and base_address_extended. It is used only to redefine the _split_street_with_params method from base_address_extended. This method is used to find the street name, number and number2 from an address, given a format coming from the country.
However, the override completely fucks up the original method's purpose and uses a hardcoded regex coming out of the blue. Parameters like the country format are not taken into account, which is annoying when trying to parse country-dependent data.
Tests from base_address_extended crash completely when used with the partner_autocomplete_address_extended implementation.
Considering the original complete specifications from commits """ For Name field or M2O, gives a list of companies Data comes from Odoo IAP Service """
Or specs found in the original task ID 1867818 pad """ If Extended Address module is installed, the street number should be correctly splitted Or the complete address is put in street2 if impossible to parse """
We think that this implementation is broken by design. Indeed, there is no mention of street2 anywhere in the code, and this implementation ensures street will never be correctly split. We therefore remove it completely.
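To make the point about respecting the country format concrete, here is a simplified, hedged sketch (not the actual Odoo code, and the placeholder names are assumptions) of splitting a street line driven by a format string rather than a fixed regex:

```python
# Simplified illustration only; the real logic lives in _split_street_with_params.
import re


def split_street(street, street_format="%(street_number)s/%(street_number2)s %(street_name)s"):
    """Turn a country's street format into a regex and parse the street line."""
    field_patterns = {
        "street_number": r"(?P<street_number>\d+)",
        "street_number2": r"(?P<street_number2>\w*)",
        "street_name": r"(?P<street_name>.+)",
    }
    pattern = re.escape(street_format)
    for field, sub_pattern in field_patterns.items():
        pattern = pattern.replace(re.escape("%%(%s)s" % field), sub_pattern)
    match = re.match(pattern, street)
    # Fall back to putting the whole line in the street name if parsing fails.
    return match.groupdict() if match else {"street_name": street}


print(split_street("40/3 Chaussee de Namur"))
# {'street_number': '40', 'street_number2': '3', 'street_name': 'Chaussee de Namur'}
```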
In this merge we also
- clean and improve tests in base_address_extended. Purpose is to make tests easier to understand and more data oriented. Tests about company / partner address fields are added to ensure coherency;
- reorganize python code of base_address_extended according to guidelines;
- rename compute / inverse methods of base_address_extended;
- remove dead code from base partner model;
See sub commits for more details.
LINKS
Task ID 2158302 PR #42678
Signed-off-by: Thibault Delavallee (tde) [email protected]
Fixed the typos that i added myself
Welp, it's been there forever and no one noticed until flamy did. fuck you flamy.
CatEd continues to be a pain in the ass, might just do this shit with game maker
Added V1.0 JAR release.
Do whatever the fuck you want with it. :)
It helps if you actually publish your stupid publications instead of evaluating them IN BAND in the STUPID PUBLISHING MECHANISM WTF IS WRONG WITH YOU YOU IDIOT AHHHHHH!!!!!!!!!
"Hmm, why is my stack so deep in this detached, pub/sub model? Oh I'm sure it's nothing I guess the code is just that deep."
I am an idiot.
[READY]Medical Kiosks V3.0. New TGUI Interface, New functionality, some minor fixes. (#47578)
AKA: This shit again.
About The Pull Request
So based on feedback I've been getting over the past month, the main issue with medical kiosks is that even as a roundstart, public medical analyzer, the cost on use at T1 isn't anywhere near helpful enough to warrant not breaking into medical storage and printing an analyzer. This go around I'm pretty much scrapping the dependence on upgrades in order to turn it into an economy reliant machine instead.
Now featuring so much info with all 4 scans, I had to put them into tabs!
Now, the machine begins with the full docket of information typically provided by the Advanced Medical Analyzer, but each section of information is an individual purchase. General Information is provided under "Patient Health", Issues where the player may realize something non-obvious is wrong can be found under "Symptom Based Checkup", and "Neuro/Radiological Scan" covers the host of Cellular/Radiation issues.
As a means of alleviating concerns about having the whole host of advanced medical scanner information available round-start, I've bumped up the minimum cost for each scan type to 10 credits, so for the whole set of information it'll cost you about 40 credits.
Quick video link showing how it works in practice: https://cdn.discordapp.com/attachments/184507411648741378/642437277632561182/2019-11-08_13-49-31.mp4
In addition to that, some sanity checks that were missing from the first couple PRs were added, so Ghosts and Borgs won't runtime trying to use a machine that only works on the living.
- Bugfixes from the first time (I am so sorry about the line spacing)
- Have a working, functional TGUI that shows all the old Kiosk information plus what you can get off of medical analyzers that I skipped over
And these things if/when I get to it:
- Adds emagged functionality.
- Allow for crew to scan other crew using the machine.
In the meantime this is SUPER DNM until at least those first 3 are ironed out.
Why It's Good For The Game
Helps to Enforce the Medical Kiosk as what I initially hoped it would function as, a money sink for Medbay. With the new budget changes, this means that crew who use the medical kiosk are actively paying every member of medbay.
Additionally, the feedback I got from literally everyone I've talked to has been pretty universal: The medical kiosk is pretty much worthless to use, even at shift start, because it's not worth upgrading and by the time you DO upgrade it, you can just print your own medical analyzer and skip the whole process.
Changelog
cl
add: Medical Kiosks now have more functionality available, including showing blood levels, virus information, and cumulative total health.
add: You can now alt-click a Medical Kiosk to remove the medical scanner wand, so that you can scan someone else.
add: Medical Kiosks now use TGUI-next.
tweak: The information in the medical kiosk is now split up between 4 different scan types: General, Symptom Based, Neuro/Radiologic, and Chemical Analysis scans.
balance: Each medical kiosk scan costs a base 10 credits minimum.
fix: Medical Kiosks don't runtime on ghosts and borgs anymore.
/cl
Experimenting with using the Diesel ORM with PostgreSQL as the backend.
Researched MongoDB and it seems to be more trouble than it's worth. Managed to create friendly wrappers and logic around searching, upserting, and deleting records.
For the future, all structs that are serializable into Postgres and defined as tables are defined in their own separate files, such as:
clients.rs, accounts.rs, my_other_struct.rs
Within these files is another struct that holds all the methods and logic for interacting with just that table.
Still a work in progress, and I got like 2 hours of sleep tonight, so the code probably looks like mush (pun-intended)
Update git submodules
- Update cinder from branch 'master'
- Merge "Introduce flake8-import-order extension"
Introduce flake8-import-order extension
This adds usage of the flake8-import-order extension to our flake8 checks to enforce consistency on our import ordering to follow the overall OpenStack code guidelines.
Since we have now dropped Python 2, this also cleans up a few cases for things that were third party libs but became part of the standard library such as mock, which is now a standard part of unittest.
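As a concrete example of the ordering the extension enforces (module names here are just for illustration): standard library first, then third-party, then local imports, each group separated by a blank line.

```python
# Standard library
import os
from unittest import mock

# Third party
from oslo_config import cfg
from oslo_log import log as logging

# Local
from cinder import exception
from cinder.volume import driver
```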
Some questions, in order of importance:
Q: Are you insane? A: Potentially.
Q: Why should we touch all of these files? A: This adds consistency to our imports. The extension makes sure that all imports follow our published guidelines of having imports ordered by standard lib, third party, and local. This will be a one time churn, then we can ensure consistency over time.
Q: Why bother? This doesn't really matter. A: I agree - but...
We have the issue that we have less people actively involved and less time to perform thorough code reviews. This will make it objective and automated to catch these kinds of issues.
But part of this, even though it may seem a little annoying, is about making it easier for contributors. Right now, we may or may not notice if something is following the guidelines or not. And we may or may not comment in a review to ask for a contributor to make adjustments to follow the guidelines.
But then further along into the review process, someone decides to be thorough, and after the contributor feels like they've had to deal with other change requests and things are in really good shape, they get a -1 on something mostly meaningless as far as the functionality of their code. It can be a frustrating and disheartening thing.
I believe this actually helps avoid that by making it an objective thing that they find out right away up front - either the code is following the guidelines and everything is happy, or it's not and running local jobs or the pep8 CI job will let them know right away and they can fix it. No guessing on whether or not someone is going to take a stand on following the guidelines or not.
This will also make it easier on the code reviewers. The more we can automate, the more time we can spend in code reviews making sure the logic of the change is correct and less time looking at trivial coding and style things.
Q: Should we use our hacking extensions for this? A: Hacking has had to keep back linter requirements for a long time now. Current versions of the linters actually don't work with the way we've been hooking into them for our hacking checks. We will likely need to do away with those at some point so we can move on to the current linter releases. This will help ensure we have something in place when that time comes to make sure some checks are automated.
Q: Didn't you spend more time on this than the benefit we'll get from it? A: Yeah, probably.
Change-Id: Ic13ba238a4a45c6219f4de131cfe0366219d722f Signed-off-by: Sean McGinnis [email protected]
Spell tweaks/improvements, new monsterdrops
- Set it so that air talisman's effect only works on summoned monsters, using the new ability to filter by monster ID. This eliminates having it apply the effect to NPCs.
- Set it so that Consecrate has added effects when you hit specific monsters with it. The Arcane Blessing version hits just about every nether, triffid, and fungal monster in vanilla, along with all four arcana boss monsters, with a dazing attack, and deals massive damage to hostile summoned monsters.
- Also gave a lesser version of the effect to the Magic Sign version of Consecrate. The selection of monsters affected is narrower (no bosses, few advanced monsters), the total effect is weaker, and it doesn't affect summoned monsters at all.
- Finally changed Flame Armor and Frost Armor to more consistent-sounding names, Heat Ward and Cold Ward. Had to do it the stupid way via obsoleting the old spells and making new ones with the desired ID, because the spell list is by spell ID instead of something sane like spell NAME.
- Added drop additions for the new triffid monster, and the shock leech family.
Decouple service acquisition from request processing
At the outset of the proxy's life, we introduced a Router. The router has a few jobs:
- To identify a target for each request;
- To provision a service for that target; and
- To maintain a cache of services to be reused across connections.
- To evict "idle" services that have not been used, i.e. so that the cache does not leak defunct services.
Because the router cannot know which inner service will be used before the request is received, the router's poll_ready always returns ready, meaning that it cannot exert backpressure to its callers.
So, in order to ensure that the router's inner services can be shared by an arbitrary number of callers---and to ensure that the inner service is driven to readiness---we had to add a buffering layer within the router. The buffer holds requests while the inner service is driven to readiness on a dedicated task.
But requests could remain queued indefinitely, so we introduced a "deadline" feature to the buffer so that the request could be stolen from the queue and failed if the request was not processed in a given amount of time.
And then we added Service Profiles with destination overrides; and with these features, a slew of new routers and buffers.
As we've diagnosed recent issue reports, it's become apparent that the system, as it's grown organically, does not properly handle backpressure. This most frequently manifests as 503s when requests are timed out of buffers, though it is undoubtedly related to a plethora of other user-reported issues (like Service Discovery staleness).
Buffers are deployed to solve two problems: (1) Clonability and (2) Readiness.
Without getting too into the weeds of Rust's memory model or Tokio's execution model, clonability basically refers to the ability to share a single service across multiple tasks. For example, if an application initiates multiple connections to an outbound proxy, we don't want to create new routers/caches/load balancers for each connection. Instead, we want to share the cached load balancers across all of these connections, so we need to clone the cache into each connection's task. The buffer allows multiple tasks to send messages to a single service (via a multi-producer, single-consumer queue).
In Tower, a service's readiness indicates its ability to process requests. Before a request can be dispatched, the caller must invoke Service::poll_ready to ensure that the inner service is able to accept a request. This is how backpressure works. If an HTTP server's inner service is not ready, it won't attempt to read a request from the socket, and so the remote client's write calls will block once the kernel buffers are full. Backpressure magic, baby!
When we use buffers to "ensure readiness", we are effectively disabling backpressure. We are signing up to handle requests and have to deal with timeouts, etc. We should only do this in rare and exceptional circumstances. But, as discussed above, our current routing strategy explicitly requires that we do not exert backpressure: routers must always be ready; and so must each inner service. Otherwise requests are dropped on the floor. So... not great.
If an inner service does not become ready in a timely fashion, requests can get "stuck" in the buffer. If a caller cancels the request (i.e. by dropping the response future), we don't have any means to eagerly evict the request from the buffer. We've taken pains to be able to "steal" requests from the buffer after a timeout; but this behavior has proven to be complex, imperfect, and difficult to diagnose/explain.
As discussed above, routers do a few jobs; and we currently use routers for a few different things.
The primary use is when we receive a request and want to send it through a service that is configured by the control plane. We don't want to query the control plane for each request, and so we want to cache a service that holds the proper configuration for the target service. Also, as mentioned above, these services need to be garbage collected once they are no longer in use, otherwise the proxy is prone to consume memory without bounds.
Routers are used somewhat differently in the context of Service profiles, though. Service profile routing is substantially different from destination routing:
- All routes are known a priori, and hence do not need to be discovered for each request;
- Because all routes are provided by the control plane, garbage collection is unnecessary and undesirable.
- Because all routes in a service profile operate over the same inner service(s), and because none of the layers in service profile routes can actually implement backpressure, there's no reason a service profile router really needs to guarantee readiness.
The concrete destination router similarly abuses routers, and therefore busts our ability to exert backpressure.
- We only care about clonability. We should ~never synthesize readiness in the data path.
- We need a simpler mechanism for bounding the time a request remains in the proxy before being dispatched to a client.
- The routing layer is not a one-size-fits-all solution. Be wary the siren song of code reuse.
Armed with this fresh knowledge, I've refactored the proxy stacks to (1) eliminate all buffer layers from the data path and (2) enforce a single service acquisition timeout that bounds the amount of time a request waits in the proxy to be dispatched to a client.
Assuming that we can relax the readiness constraints, what size should the buffers be to support clonability? If we used a buffer with only a single slot, for instance, we could limit how many requests can get stuck in a buffer (to 1), but the problem would remain. We can't create a buffer with a capacity of zero... but if we could, we would have a Mutex. So why not just use one of those?
My proposed change replaces use of a Buffer with a new (clonable) Lock layer. Lock::poll_ready first acquires a lock guaranteeing exclusive access to the inner service before the service is polled to ready. Then, the locked state is consumed and dropped as a request is dispatched on this service, permitting other callers to obtain exclusive access to the service.
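As a rough analogy (a toy asyncio sketch, not the actual Tower middleware), the idea is that cloned handles share one inner service, each caller waits for exclusive access before dispatching, and the lock is released as soon as the request has been handed off:

```python
import asyncio


class SharedInner:
    """Stand-in for the single inner service shared by every clone."""

    async def dispatch(self, request):
        await asyncio.sleep(0.01)  # pretend to forward the request upstream
        return f"response to {request}"


class LockedService:
    """Toy analogue of the Lock layer: acquire exclusive access to the inner
    service, start the dispatch, then release the lock so the next caller
    can proceed; the response is awaited outside the lock."""

    def __init__(self, inner, lock):
        self._inner = inner
        self._lock = lock  # shared by all clones

    async def call(self, request):
        async with self._lock:  # "poll_ready": wait for exclusivity
            pending = asyncio.ensure_future(self._inner.dispatch(request))
        return await pending


async def main():
    inner, lock = SharedInner(), asyncio.Lock()
    clones = [LockedService(inner, lock) for _ in range(3)]
    print(await asyncio.gather(*(clone.call(i) for i, clone in enumerate(clones))))


asyncio.run(main())
```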
The one important subtlety here is that we need to be careful about services that invoke poll_ready before polling another source (like the socket) for a request. In these cases, the service can hold the lock on the inner service, stalling other services that are waiting for it.
Services that are in this situation may either clone the service to drop the locked state, or a Oneshot layer can be used to push backpressure into the service's response (i.e. so that poll_ready is not invoked until the request is materialized).
This is all to say that using a lock comes with some complexity. We have to be careful about how we use services that may contain a lock, otherwise tasks may be starved. This seems to be an acceptable tradeoff, though we'll need to find ways to detect/test improper access patterns.
Fundamentally, we need to do routing without requiring that all routes are ready; and we need to be able to limit the amount of time a request spends waiting for a ready service.
In order to accomplish this, we need to formalize two distinct "phases" of proxying. The first phase is service acquisition, which can be loosely described by the tower::MakeService API: we first (asynchronously) build a service for a target type so that the request can be dispatched to the target service. We can set a service acquisition timeout on the future that acquires the service (i.e. without setting a response timeout).
This is all accomplished by decoupling the routing from the caching. Routing selects which target to use. The cache implements tower::MakeService, returning a service for each target. With this decoupling, we can insert timeout layers within the top-level logical router to ensure that service acquisition is bounded.
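Continuing the toy asyncio analogy (names invented for illustration), the two phases with a timeout applied only to acquisition might look like this:

```python
import asyncio

ACQUIRE_TIMEOUT = 3.0  # bounds only the wait for a usable service


class ServiceCache:
    """Toy stand-in for a MakeService-style cache: one service per target."""

    def __init__(self):
        self._services = {}

    async def get_or_make(self, target):
        if target not in self._services:
            await asyncio.sleep(0.05)  # pretend to resolve/build the service
            self._services[target] = lambda request: f"{target}: {request}"
        return self._services[target]


async def proxy(cache, target, request):
    # Phase 1: service acquisition, bounded by the acquisition timeout.
    service = await asyncio.wait_for(cache.get_or_make(target), ACQUIRE_TIMEOUT)
    # Phase 2: dispatch; no response timeout is imposed here.
    return service(request)


print(asyncio.run(proxy(ServiceCache(), "web.default.svc", "GET /")))
```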
With this new strategy, the Tokio runtime becomes the buffer. We can bound the total number of requests in flight to constrain this buffer; but this means that backpressure, cancelation, etc can be achieved more naturally.
Look, this change is huge. A significant number of superficial changes are co-mingled here. On the upside, compile time has been reduced substantially, and some organically diverging idioms have been consolidated.
- The proxy stacks;
- The lock layer;
- The caching layer;
- The routing layer;
- The profile router;
Beyond this, I've endeavored to limit PhantomData uses, especially in layers. They make it so the stack cannot be reused and, I believe, negatively impact compile times. In general, I've tried to remove unnecessary type constraints. They make the system more resistant to change. Such changes include:
- The router::Make type has been renamed to stack::NewService, since Make and MakeService were too close for comfort.
- Many layers now implement both MakeService (via Service) and NewService.
- Stack types are now generally named MakeFoo or NewFoo.
- Types in the retry module have been renamed (so that the Service could be called Retry, basically).
- The profile router is just totally new.
- Probably other things, too. It's been a trip.