< 2021-06-24 >

2,927,324 events, 1,525,898 push events, 2,471,974 commit messages, 183,086,346 characters

Thursday 2021-06-24 00:28:53 by MuitosMezaninos

Fuck you 9 year old fucking bullshit code of a shit fucking game fuck you and shove this eldritch bullshit up your ass


Thursday 2021-06-24 03:22:40 by Annie Kellner

holy fucking shit it worked. was able to create a plot of the historical tmax by expanding the memory limit in R


Thursday 2021-06-24 06:39:24 by kesarcontrol123

Kesar Control Leading Manufacturer of Stability Chamber

Kesar Control Systems, a manufacturer of pharmaceutical and scientific equipment established in 2007, is here with the best people and the best manufacturing process. To conduct any sort of experiment, one of the most important things is machinery; here we provide you with high-quality, durable machines which work smoothly, along with the most useful features. Pharmaceutical and scientific experiments need a wide variety of machines to produce, store and keep experiments stable for a long period of time, so it is very important to choose wisely in this department.

The Stability Chamber, widely used by pharmaceutical companies, has been highly in demand since the onset of the COVID-19 pandemic. Medicines, drugs and especially vaccines need to be stored in monitored conditions, which includes monitoring of temperature, light exposure and moisture, among other conditions. A Cold Chamber requires a range of cooling and heating technology to navigate through different temperatures quickly and easily based on specific requirements. For pharmaceutical and scientific storage of light-sensitive articles, the Photostability Chamber is developed using state-of-the-art technology to ensure uniform light distribution and convenient adjustment of exposure. The Walk-in Stability Chamber is also custom-built using exceptional technology to maximize efficiency by stabilizing large-scale storage requirements. Kesar Control Systems is a trusted name when it comes to BOD Incubator and Laboratory Incubator manufacturing. The various uses include, but are not limited to, vaccine preservation, life cycle testing, shelf life studies, etc. Further uses of the BOD Incubator extend to cultivation of microorganisms for biological studies and refrigerated storage for botany, sewage and water pollution work.

We envision being one of the most efficient manufacturers of temperature- and humidity-based equipment by adopting advanced techniques for enhancing quality and constantly upgrading from better to best. We manufacture all our equipment as per GMP / ICH guidelines and also conform to MCA & US FDA requirements, so you don't need to worry about the quality of the product as we tick all the boxes of quality assurance. You can visit us at https://www.kesarcontrol.com/ to contact us, place an order and get more information about what we do and how we do it.

For more details about our product visit:  https://www.kesarcontrol.com/stability-chamber.php


Thursday 2021-06-24 07:11:43 by Nithya B

CHECKING ALPHABETS

Lucy Pevensie and her brothers Peter and Edmund found themselves in Narnia, the land of magic, during World War II. Narnia was filled with gentle people; it is a place where the trees sing, fauns dance and animals talk. Being from England, Lucy wanted to teach the kids the English language. She started teaching them the alphabet, but then she remembered that she might go back to London and felt sad. Edmund and Peter discussed it with each other and suggested that Lucy come up with a program so that the kids could learn on their own when she was not there. Can you help Lucy write a C++ program to check whether a given character is a vowel or a consonant?

INPUT & OUTPUT FORMAT:

The input consists of a character.

The output should be any one of the below-given strings.

“Vowel” or “Consonant” or “Not an alphabet”.

SAMPLE INPUT & OUTPUT:

e

Vowel

z

Consonant

#coding

    #include <iostream>
    using namespace std;

    int main() {
        char n;
        cin >> n;
        if ((n >= 'a' && n <= 'z') || (n >= 'A' && n <= 'Z')) {
            switch (n) {
                case 'a': case 'e': case 'i': case 'o': case 'u':
                case 'A': case 'E': case 'I': case 'O': case 'U':
                    cout << "Vowel";
                    break;
                default:
                    cout << "Consonant";
                    break;
            }
        } else {
            cout << "Not an alphabet";
        }
        return 0;
    }


Thursday 2021-06-24 10:25:26 by Marko Grdinić

"8:50am. Let me chill a bit.

https://www.reddit.com/r/MachineLearning/comments/o6j8qf/r_aliasfree_gan/

This seems interesting.

https://nvlabs.github.io/alias-free-gan/

The image quality is really impressive.

9:05am. Right now I am reading the paper. I had time to think about what I want to try tackling the Holdem training next.

What I am going to do is implement fast averaging on top of the slow one I am already doing and have the network train against the fast average. For all I know this might work even for GANs. If not, I always have the duality gap method to fall back on.

9:35am. Ok, focus me. Let me read that GAN paper, a few chapters of Hagure Idol and then I will implement my idea and let it run for a while.

10:05am. Done with the above. I do not get a thing of that paper. But it is quite impressive.

For the time being let me just do my own thing. GANs are something I will tackle when the hardware costs go down.

    favg_iter = 100
    ...
    def make_avg(max_t):
        def avg_fn(avg_p, p, t): return avg_p + (p - avg_p) / min(max_t, t + 1)
        return AveragedModel(value,avg_fn=avg_fn), AveragedModel(policy,avg_fn=avg_fn), AveragedModel(head,avg_fn=avg_fn)

    valuea, policya, heada = make_avg(iter_chk)
    valuefa, policyfa, headfa = make_avg(favg_iter)

Let me do this.
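A minimal standalone sketch of what this avg_fn rule does (plain Python, illustrative values only): as long as t + 1 <= max_t it reproduces the running arithmetic mean of everything seen so far, and beyond that it turns into an exponential moving average with coefficient 1 / max_t, which is why the fast average with favg_iter = 100 tracks the recent parameters much more closely than the slow one.

    def avg_fn(avg_p, p, t, max_t):
        # Same update rule as above, written out for plain floats.
        return avg_p + (p - avg_p) / min(max_t, t + 1)

    xs = [1.0, 4.0, 2.0, 8.0, 5.0]
    avg = xs[0]  # the average starts as a copy of the first value
    for t, p in enumerate(xs[1:], start=1):
        avg = avg_fn(avg, p, t, max_t=100)
    print(avg, sum(xs) / len(xs))  # identical while t + 1 <= max_t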

10:20am.

def create_nn_agent(iter_train,iter_avg,iter_chk,vs_self,vs_one,neural,uniform_player): # self play NN
    assert ((iter_train + iter_avg) % iter_chk == 0)
    batch_size = 2 ** 10
    favg_iter = 100
    head_decay = 1.0 - 2 ** -2

    value, policy, head = neural_create_model(neural.size)
    opt = SignSGD([
        {'params': value.parameters(), 'lr': 2 ** -6},
        {'params': policy.parameters()}
        ],{'lr': 2 ** -8})

    def make_avg(max_t):
        def avg_fn(avg_p, p, t): return avg_p + (p - avg_p) / min(max_t, t + 1)
        return AveragedModel(value,avg_fn=avg_fn), AveragedModel(policy,avg_fn=avg_fn), AveragedModel(head,avg_fn=avg_fn)

    valuea, policya, heada = make_avg(iter_chk) # The slow average
    valuefa, policyfa, headfa = make_avg(favg_iter) # The fast average

    def neural_player(value,policy,head,is_update_head=False,is_update_value=False,is_update_policy=False,epsilon=0.0):
        return neural.handler(partial(model_evaluate,value,policy,head,is_update_head,is_update_value,is_update_policy,epsilon))

    def run(is_avg=False):
        nonlocal head,heada
        opt.zero_grad(True)
        head.decay(head_decay)
        pl,plfa = neural_player(value,policy,head,True,True,True,2 ** -2), neural_player(valuefa,policyfa,headfa)
        vs_one(batch_size // 2,pl,plfa)
        vs_one(batch_size // 2,plfa,pl)
        # r = vs_one(batch_size,neural_player(True,True,True,2 ** -2),uniform_player)
        # logging.info(f"The reward vs the uniform is {r.mean()}")
        opt.step()
        valuefa.update_parameters(value); policyfa.update_parameters(policy); headfa.update_parameters(head)

        if is_avg: valuea.update_parameters(value); policya.update_parameters(policy); heada.update_parameters(head)

    logging.info("** TRAINING START **")
    t1 = time.perf_counter()

    for i in range(1,iter_train+iter_avg+1):
        is_avg = iter_train < i
        run(is_avg)
        if i % iter_chk == 0:
            with open(f"dump/nn_agent_{i}.nnreg",'wb') as f: torch.save((value,policy,head),f)
            if is_avg:
                with open(f"dump/nn_agent_{i}.nnavg",'wb') as f: torch.save((valuea.module,policya.module,heada.module),f)
            logging.info(f'Checkpoint {i}')

    logging.info("** TRAINING DONE **")
    t2 = time.perf_counter()
    logging.info(f"TIME ELAPSED: {t2-t1:0.4f}")

if __name__ == '__main__':
    import numpy as np
    import pyximport
    pyximport.install(language_level=3,setup_args={"include_dirs":np.get_include()})
    from create_args import main
    args = main()['train']

    log_path = 'dump/training.log'
    logging.basicConfig(
        filename=log_path,
        level=logging.DEBUG,
        datefmt='%m/%d/%Y %I:%M:%S %p',
        format='%(asctime)s %(message)s'
        )

    print("Running...")
    print(f"The details of training are in: {log_path}")
    create_nn_agent(2_000 // 4,30_000 // 4,2_000 // 4,**args)
    print("Done.")

The idea I had yesterday was to make a copy of the net at the present step and train the agent against that, but during the night it occurred to me that I could just have it train against its average. This should be an interesting experiment. Let me run it for a quarter of the iterations that I used yesterday.

AttributeError: 'AveragedModel' object has no attribute 'head'

Ah, whoops. I should extract the module.

10:25am.

    valuea, policya, heada = make_avg(iter_chk) # The slow average
    valuefa, policyfa, headfa = make_avg(favg_iter) # The fast average
    valuefa.requires_grad_

Damn, I should really be turning off the requires_grad_ for this. I'll have to check out the state of that.

import torch
import torch.nn
from torch.optim.swa_utils import AveragedModel

a = torch.nn.Linear(2,2,False)
b = AveragedModel(a)
b.requires_grad_(False)
a.weight.requires_grad
b.module.weight.requires_grad

There is no conflict. Turning off the grad on b does not do it on a. And the avg model does have it on by default.

10:30am.

valuefa.requires_grad_(False), policyfa.requires_grad_(False), headfa.requires_grad_(False)

No, it is not important, forget this.

Ok, it already played 1k hands. Let me try it out.

...No, it just keeps raising like a moron again.

Let me try the 1.5k.

Oh, it raised to 15 instead of half the stack! This seems promising.

10:40am. It is still quite awful, but at least it is developing some sort of diversity in its actions. It still does not know how to fold though. Let me try the 2k player.

10:45am. 2k is trash. It has not moved at all from its policy. How about 2.5k?

Oh, it raised to 5. That is different.

It still did not learn to fold.

10:50am. Let me try the 3.5k.

It has gone back to making huge raises. It seems the primary challenge today will be figuring out how to get it to fold.

Enough. I can't bear this anymore. I'll abort the training. If it cannot learn to fold I think the most likely culprit is the value function.

head_decay = 1.0 - 2 ** -2

Maybe this decay is way too high.

head_decay = 1.0 - 2 ** -10

Let me try this. With Leduc finding the right value was no problem, but with what I have now, it might be forgetting too quickly.
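A rough back-of-the-envelope check, assuming head.decay(x) simply scales the head's accumulated statistics by x once per iteration (a guess from the name, not verified): the half-life of old information is log(0.5) / log(head_decay), roughly 2.4 iterations at 1 - 2 ** -2 versus roughly 709 at 1 - 2 ** -10.

    import math

    for head_decay in (1.0 - 2 ** -2, 1.0 - 2 ** -10):
        half_life = math.log(0.5) / math.log(head_decay)
        print(head_decay, round(half_life, 1))  # ~2.4 vs ~709.4 iterations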

I at least want to see it learning to fold trash hands.

Maybe I should go back to Leduc and see how it does when there is no decay.

Actually, let me try that out here. I'll turn off head decay just to see how it does.

I'd bet it would do just fine without decay. But too much decay could very well hurt it.

11am. This is not so bad. Training the agent and playing against it is something I should have been doing from the start.

This is the kind of work I've dreamed about.

Let me try the 500 agent.

11:20am. I've aborted it again.

So far my theories have all turned out to be wrong. I thought it was an aggrodonk because of optimization vagaries, but now I am not sure.

Instead of doing what I have been doing so far, let me ditch playing against the average and increase the batch size by a factor of 2 ** 7. Combined with signSGD maybe the variance of the gradients is just too high.
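A toy sketch of that intuition, with made-up numbers (true_grad, noise and the trial count are purely illustrative, nothing measured here): signSGD keeps only the sign of the mini-batch gradient, so whenever sampling noise flips that sign the update is a full-magnitude step in the wrong direction, and the flip rate falls off quickly as the batch grows.

    import numpy as np

    rng = np.random.default_rng(0)

    def sign_accuracy(batch_size, true_grad=0.01, noise=1.0, trials=500):
        # Fraction of mini-batches whose mean gradient has the same sign as
        # the true gradient, for per-sample gradients ~ N(true_grad, noise).
        correct = 0
        for _ in range(trials):
            g = rng.normal(true_grad, noise, size=batch_size).mean()
            correct += np.sign(g) == np.sign(true_grad)
        return correct / trials

    for bs in (2 ** 10, 2 ** 14, 2 ** 17):
        print(bs, sign_accuracy(bs))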

11:25am.

def create_nn_agent(iter_train,iter_avg,iter_chk,vs_self,vs_one,neural,uniform_player): # self play NN
    assert ((iter_train + iter_avg) % iter_chk == 0)
    batch_size = 2 ** 17
    favg_iter = 100
    head_decay = 1.0 - 2 ** -2

    value, policy, head = neural_create_model(neural.size)
    opt = SignSGD([
        {'params': value.parameters(), 'lr': 2 ** -6},
        {'params': policy.parameters()}
        ],{'lr': 2 ** -8})

    def make_avg(max_t):
        def avg_fn(avg_p, p, t): return avg_p + (p - avg_p) / min(max_t, t + 1)
        return AveragedModel(value,avg_fn=avg_fn), AveragedModel(policy,avg_fn=avg_fn), AveragedModel(head,avg_fn=avg_fn)

    valuea, policya, heada = make_avg(iter_chk) # The slow average
    # valuefa, policyfa, headfa = make_avg(favg_iter) # The fast average

    def neural_player(value,policy,head,is_update_head=False,is_update_value=False,is_update_policy=False,epsilon=0.0):
        return neural.handler(partial(model_evaluate,value,policy,head,is_update_head,is_update_value,is_update_policy,epsilon))

    def run(is_avg=False):
        nonlocal head,heada
        opt.zero_grad(True)
        head.decay(head_decay)
        pl = neural_player(value,policy,head,True,True,True,2 ** -2)
        vs_self(batch_size,pl)
        # plfa = neural_player(valuefa.module,policyfa.module,headfa.module)
        # vs_one(batch_size // 2,pl,plfa)
        # vs_one(batch_size // 2,plfa,pl)
        opt.step()
        # valuefa.update_parameters(value); policyfa.update_parameters(policy); headfa.update_parameters(head)

        if is_avg: valuea.update_parameters(value); policya.update_parameters(policy); heada.update_parameters(head)

    logging.info("** TRAINING START **")
    t1 = time.perf_counter()

    for i in range(1,iter_train+iter_avg+1):
        is_avg = iter_train < i
        run(is_avg)
        if i % iter_chk == 0:
            with open(f"dump/nn_agent_{i}.nnreg",'wb') as f: torch.save((value,policy,head),f)
            if is_avg:
                with open(f"dump/nn_agent_{i}.nnavg",'wb') as f: torch.save((valuea.module,policya.module,heada.module),f)
            logging.info(f'Checkpoint {i}')

    logging.info("** TRAINING DONE **")
    t2 = time.perf_counter()
    logging.info(f"TIME ELAPSED: {t2-t1:0.4f}")

Let me try doing this.

    sample_probs += action_mask_inv / (action_mask_inv.sum(-1,keepdim=True) * (1 / epsilon))
RuntimeError: CUDA out of memory. Tried to allocate 82.00 MiB (GPU 0; 4.00 GiB total capacity; 2.80 GiB already allocated; 6.73 MiB free; 2.82 GiB reserved in total by PyTorch)

No helping it. Let me lower the batch size.

for _ in range(2 ** 3): vs_self(batch_size,pl)

I've lowered the batch size to 2 ** 14. Let me try doing it like this.

11:40am. No, forget this. Such a large batch size for a single step is just way too high.

12:25pm. I am thinking. Let me stop here for the morning."


Thursday 2021-06-24 13:37:23 by LaciaMemeFrame

throttling for media group fuck you durov mazafaka


Thursday 2021-06-24 13:47:28 by Daniel Vetter

dma-buf: Document dma-buf implicit fencing/resv fencing rules

Docs for struct dma_resv are fairly clear:

"A reservation object can have attached one exclusive fence (normally associated with write operations) or N shared fences (read operations)."

https://dri.freedesktop.org/docs/drm/driver-api/dma-buf.html#reservation-objects

Furthermore, a review across all of upstream.

First off, render drivers and how they set implicit fences:

  • nouveau follows this contract, see in validate_fini_no_ticket()

      	nouveau_bo_fence(nvbo, fence, !!b->write_domains);
    

    and that last boolean controls whether the exclusive or shared fence slot is used.

  • radeon follows this contract by setting

      p->relocs[i].tv.num_shared = !r->write_domain;
    

    in radeon_cs_parser_relocs(), which ensures that the call to ttm_eu_fence_buffer_objects() in radeon_cs_parser_fini() will do the right thing.

  • vmwgfx seems to follow this contract with the shotgun approach of always setting ttm_val_buf->num_shared = 0, which means ttm_eu_fence_buffer_objects() will only use the exclusive slot.

  • etnaviv follows this contract, as can be trivially seen by looking at submit_attach_object_fences()

  • i915 is a bit of a convoluted maze with multiple paths leading to i915_vma_move_to_active(), which sets the exclusive flag if EXEC_OBJECT_WRITE is set. This can either come as a buffer flag for softpin mode, or through the write_domain when using relocations. It follows this contract.

  • lima follows this contract, see lima_gem_submit() which sets the exclusive fence when the LIMA_SUBMIT_BO_WRITE flag is set for that bo

  • msm follows this contract, see msm_gpu_submit() which sets the exclusive flag when the MSM_SUBMIT_BO_WRITE is set for that buffer

  • panfrost follows this contract with the shotgun approach of just always setting the exclusive fence, see panfrost_attach_object_fences(). Benefits of a single engine I guess

  • v3d follows this contract with the same shotgun approach in v3d_attach_fences_and_unlock_reservation(), but it has at least an XXX comment that maybe this should be improved

  • vc4 uses the same shotgun approach of always setting an exclusive fence, see vc4_update_bo_seqnos()

  • vgem also follows this contract, see vgem_fence_attach_ioctl() and the VGEM_FENCE_WRITE. This is used in some igts to validate prime sharing with i915.ko without the need of a 2nd gpu

  • virtio follows this contract again with the shotgun approach of always setting an exclusive fence, see virtio_gpu_array_add_fence()

This covers the setting of the exclusive fences when writing.

Synchronizing against the exclusive fence is a lot more tricky, and I only spot checked a few:

  • i915 does it, with the optional EXEC_OBJECT_ASYNC to skip all implicit dependencies (which is used by vulkan)

  • etnaviv does this. Implicit dependencies are collected in submit_fence_sync(), again with an opt-out flag ETNA_SUBMIT_NO_IMPLICIT. These are then picked up in etnaviv_sched_dependency which is the drm_sched_backend_ops->dependency callback.

  • vc4 seems to not do much here; maybe it gets away with it by not having a scheduler and only a single engine. Since all newer Broadcom chips than the OG vc4 use v3d for rendering, which follows this contract, the impact of this issue is fairly small.

  • v3d does this using the drm_gem_fence_array_add_implicit() helper, which its drm_sched_backend_ops->dependency callback v3d_job_dependency() then picks up.

  • panfrost is nice here and tracks the implicit fences in panfrost_job->implicit_fences, which again the drm_sched_backend_ops->dependency callback panfrost_job_dependency() picks up. It is mildly questionable though since it only picks up exclusive fences in panfrost_acquire_object_fences(), but not buggy in practice because it also always sets the exclusive fence. It should pick up both sets of fences, just in case there's ever going to be a 2nd gpu in a SoC with a mali gpu. Or maybe a mali SoC with a pcie port and a real gpu, which might actually happen eventually. A bug, but easy to fix. Should probably use the drm_gem_fence_array_add_implicit() helper.

  • lima is nice and easy: it uses drm_gem_fence_array_add_implicit() and the same scheme as v3d.

  • msm is mildly entertaining. It also supports MSM_SUBMIT_NO_IMPLICIT, but because it doesn't use the drm/scheduler it handles fences from the wrong context with a synchronous dma_fence_wait. See submit_fence_sync() leading to msm_gem_sync_object(). Investing into a scheduler might be a good idea.

  • all the remaining drivers are ttm based, where I hope they do appropriately obey implicit fences already. I didn't do the full audit there because a) not following the contract would confuse ttm quite well and b) reading non-standard scheduler and submit code which isn't based on drm/scheduler is a pain.

Onwards to the display side.

  • Any driver using the drm_gem_plane_helper_prepare_fb() helper will do this correctly. Overwhelmingly, most drivers get this right, except a few that totally don't. I'll follow up with a patch to make this the default and avoid a bunch of bugs.

  • I didn't audit the ttm drivers, but given that dma_resv started there I hope they get this right.

In conclusion this IS the contract, both as documented and overwhelmingly implemented, specifically as implemented by all render drivers except amdgpu.

Amdgpu tried to fix this already in

commit 049aca4363d8af87cab8d53de5401602db3b9999
Author: Christian König [email protected]
Date: Wed Sep 19 16:54:35 2018 +0200

    drm/amdgpu: fix using shared fence for exported BOs v2

but this fix falls short on a number of areas:

  • It's racy, by the time the buffer is shared it might be too late. To make sure there's definitely never a problem we need to set the fences correctly for any buffer that's potentially exportable.

  • It's breaking uapi: dma-buf fds support poll() and differentiate between read and write access, which was introduced in

    commit 9b495a5887994a6d74d5c261d012083a92b94738
    Author: Maarten Lankhorst [email protected]
    Date: Tue Jul 1 12:57:43 2014 +0200

      dma-buf: add poll support, v3
    
  • Christian König wants to nack new uapi building further on this dma_resv contract because it breaks amdgpu, quoting

    "Yeah, and that is exactly the reason why I will NAK this uAPI change.

    "This doesn't works for amdgpu at all for the reasons outlined above."

    https://lore.kernel.org/dri-devel/[email protected]/

    Rejecting new development because your own driver is broken and violates established cross driver contracts and uapi is really not how upstream works.

Now this patch will have a severe performance impact on anything that runs on multiple engines. So we can't just merge it outright, but need a bit of a plan:

  • amdgpu needs a proper uapi for handling implicit fencing. The funny thing is that to do it correctly, implicit fencing must be treated as a very strange IPC mechanism for transporting fences, where both setting the fence and dependency intercepts must be handled explicitly. Current best practice is a per-bo flag to indicate writes, and a per-bo flag to skip implicit fencing in the CS ioctl as a new chunk.

  • Since amdgpu has been shipping with broken behaviour we need an opt-out flag from the butchered implicit fencing model to enable the proper explicit implicit fencing model.

  • for kernel memory fences due to bo moves at least the i915 idea is to use ttm_bo->moving. amdgpu probably needs the same.

  • since the current p2p dma-buf interface assumes the kernel memory fence is in the exclusive dma_resv fence slot we need to add a new fence slot for kernel fences, which must never be ignored. Since currently only amdgpu supports this there's no real problem here yet, until amdgpu gains a NO_IMPLICIT CS flag.

  • New userspace needs to ship in enough desktop distros so that users won't notice the perf impact. I think we can ignore LTS distros who upgrade their kernels but not their mesa3d snapshot.

  • Then when this is all in place we can merge this patch here.

What is not a solution to this problem here is trying to make the dma_resv rules in the kernel more clever. The fundamental issue here is that the amdgpu CS uapi is the least expressive one across all drivers (only equalled by panfrost, which has an actual excuse) by not allowing any userspace control over how implicit sync is conducted.

Until this is fixed it's completely pointless to make the kernel more clever to improve amdgpu, because all we're doing is papering over this uapi design issue. amdgpu needs to attain the status quo established by other drivers first, once that's achieved we can tackle the remaining issues in a consistent way across drivers.

v2: Bas pointed me at AMDGPU_GEM_CREATE_EXPLICIT_SYNC, which I entirely missed.

This is great because it means the amdgpu-specific piece for proper implicit fence handling already exists, and has for a while. The only thing that's now missing is

  • fishing the implicit fences out of a shared object at the right time
  • setting the exclusive implicit fence slot at the right time.

Jason has a patch series to fill that gap with a bunch of generic ioctl on the dma-buf fd:

https://lore.kernel.org/dri-devel/[email protected]/

v3: Since Christian has fixed amdgpu now in

commit 8c505bdc9c8b955223b054e34a0be9c3d841cd20 (drm-misc/drm-misc-next)
Author: Christian König [email protected]
Date: Wed Jun 9 13:51:36 2021 +0200

    drm/amdgpu: rework dma_resv handling v3

Use the audit covered in this commit message as the excuse to update the dma-buf docs around dma_buf.resv usage across drivers.

Since dynamic importers have different rules also hammer these in again while we're at it.

v4:

  • Add the missing "through the device" in the dynamic section that I overlooked.
  • Fix a kerneldoc markup mistake, the link didn't connect

v5:

  • A few s/should/must/ to make clear what must be done (if the driver does implicit sync) and what's more a maybe (Daniel Stone)
  • drop all the example api discussion, that needs to be expanded, clarified and put into a new chapter in drm-uapi.rst (Daniel Stone)

Cc: Daniel Stone [email protected]
Acked-by: Daniel Stone [email protected]
Reviewed-by: Dave Airlie [email protected] (v4)
Reviewed-by: Christian König [email protected] (v3)
Cc: [email protected]
Cc: Bas Nieuwenhuizen [email protected]
Cc: Dave Airlie [email protected]
Cc: Rob Clark [email protected]
Cc: Kristian H. Kristensen [email protected]
Cc: Michel Dänzer [email protected]
Cc: Daniel Stone [email protected]
Cc: Sumit Semwal [email protected]
Cc: "Christian König" [email protected]
Cc: Alex Deucher [email protected]
Cc: Daniel Vetter [email protected]
Cc: Deepak R Varma [email protected]
Cc: Chen Li [email protected]
Cc: Kevin Wang [email protected]
Cc: Dennis Li [email protected]
Cc: Luben Tuikov [email protected]
Cc: [email protected]
Signed-off-by: Daniel Vetter [email protected]
Link: https://patchwork.freedesktop.org/patch/msgid/[email protected]


Thursday 2021-06-24 15:29:49 by Ray Griffin

packages/ffmpeg: Disable freetype

I wish I could work out why, randomly, after nothing has been touched for days, this just breaks configure and it can't find the ft2 pkgconfig. It's all there. It worked 2 days ago. I've touched NOTHING in those 2 days. It has happened a few times and was fixed by deleting the whole mingw prefix, but fuck you for wasting my time. It doesn't appear to break mpv or have any bad effects, text renders fine, and I think these may be specific to the ffmpeg internal filters which I've disabled anyway.


Thursday 2021-06-24 15:52:31 by Jez Higgins

Pass matcher function into the matcher object

Rather than having a number of different matcher objects and getting caught up with passing dynamically allocated polymorphic objects around with std::unique_ptr, capture the behaviour in a little lambda function and let the magic of std::function handle it all for us.

The make_matcher factory function can build the appropriate function, sock it into the matcher, and then we can pass that around by value. Lovely.


Thursday 2021-06-24 16:37:46 by hypervis0r

Remove applink.c because fuck you. Link to libpthread


Thursday 2021-06-24 17:55:34 by Tk420634

Adds Scottish accent.

Adds a Scottish accent; it may be a bit too thick, but it's hilarious.

Example,

W'at t'e fuck did ye just fuckin' say about me, ye little bitc'? I'll 'ave ye know I 'raduated top o' my class in t'e Navy Seals, and I've been involved in numerous secret raids on Al-Quaeda, and I 'ave over 300 confirmed kills. I am trained in 'orilla warfare and I'm t'e top sniper in t'e entire US armed forces. ye are not'in' to me but just anot'er tar'et. I will wipe ye t'e fuck out wit' precision t'e likes o' w'ic' 'as never been seen before on t'is Eart', mark my fuckin' words. ye t'ink ye can 'et away wit' sayin' t'at s'it to me over t'e Internet? T'ink a'ain, fucker. As we speak I am contactin' my secret network o' spies across t'e USA and yer IP is bein' traced ri''t now so ye better prepare for t'e storm, ma''ot. T'e storm t'at wipes out t'e pat'etic little t'in' ye call yer life. You're fuckin' dead, kid. I can be anyw'ere, anytime, and I can kill ye in over seven 'undred ways, and t'at's just wit' my bare 'ands. Not only am I extensively trained in unarmed combat, but I 'ave access to t'e entire arsenal o' t'e United States Marine Corps and I will use it to its full extent to wipe yer miserable ass off t'e face o' t'e continent, ye little s'it. If only ye could 'ave known w'at un'oly retribution yer little "clever" comment was about to brin' down upon you, maybe ye would 'ave 'eld yer fuckin' ton'ue. But ye couldn't, ye didn't, and now you're payin' t'e price, ye 'oddamn idiot. I will s'it fury all over ye and ye will drown in it. You're fuckin' dead, kiddo.


Thursday 2021-06-24 19:44:24 by wimbiyoas

printk: Dont print init related logs

Due to some fuckups on the ROM side, userspace now spams my dmesg with the following logs, so ignore all init messages once and for all. This is annoying as hell.

init: Could not find '[email protected]::ICrypto/default' for ctl.interface_start

Signed-off-by: wimbiyoas [email protected]


Thursday 2021-06-24 20:45:10 by Wilmer Adalid

Updates for: The little girl expects no declaration of tenderness from her doll. She loves it -- and that's all. It is thus that we should love. -- DeGourmont


Thursday 2021-06-24 21:26:34 by robcore

fuck you bam dmux. also i think we need download mode, and some other fixes


< 2021-06-24 >