he/him

Alts (mostly for modding)

@sga013@lemmy.world

(Earlier also had @sga@lemmy.world for a year before I switched to @sga@lemmings.world, now trying piefed)

  • 42 Posts
  • 379 Comments
Joined 1 year ago
Cake day: March 14th, 2025

  • it does. I have snapshotting set for every hour, so every hour my file system creates a copy of my main canonical file tree. if some files changed in that hour, all other files in the snapshot are mapped to the canonical file entries (same block data), while the changed files point to their original blocks - so essentially only changed files get real copies. you can then write a command to delete a certain number of old backups (the oldest, or however many), and there are multiple graphical implementations as well.

    some examples of snapshotting file systems are zfs and btrfs. on linux the latter is better supported in general. zfs is a bsd project which has an openzfs implementation for linux, and many distros support it too.
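    a minimal sketch of doing this by hand on btrfs (the /.snapshots layout and the keep-24 policy are assumptions; in practice most people automate this with a tool like snapper or btrbk):

```shell
# create a read-only snapshot of the subvolume mounted at /home (run as root)
btrfs subvolume snapshot -r /home "/.snapshots/home-$(date +%F-%H%M)"

# list existing snapshots
btrfs subvolume list /.snapshots

# keep the newest 24, delete the rest (the date-stamped names sort chronologically)
ls -1d /.snapshots/home-* | head -n -24 | while read -r snap; do
    btrfs subvolume delete "$snap"
done
```

    the snapshot itself is near-instant because it only records block references; space is consumed later, as files diverge from the snapshot.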



  • sga@piefed.social to Memes@sopuli.xyz · Everytime · 2 days ago

    basically any amount of energy conversion is lossy - a loose fabric stretching and pulling on the frame in a “wider fashion”, and any deformation of the frame, will require a lot of energy - all subtracting from your fall. and a frame up your tushie is at worst a lifelong pain while sitting or during bowel movements, whereas a direct fall is insta-death from basically anything above, let’s say, the 10th floor (number pulled out of thin air, but assuming roughly flat contact, and the human not falling like a diver who reduces the direct load on the spine by crushing their arms).










  • editor does not matter for practical purposes (in my naive tests, helix is within +/- 5% of vim/nvim on most files in terms of memory usage; i do not use any lsps). i generally do not use lsps (too much feedback for me), but even if i did, i get something like 1-2% constant cpu usage while working, versus 0% without any lsp (averaged over 30 sec intervals).

    while compiling, you have 2 options - for testing, you can run a debug build and see if it works, and once you get something reasonable, try a release build. it should be feature-wise practically the same, but faster/more optimised (cargo build vs cargo build --release).

    in terms of crates, try not to use many. i try not to import if i can write a shitty wrapper myself, but if you do include something, look at cargo tree for the heaviest crates and try to make the tree leaner. also, prefer native rust libs over wrappers around system ones (in my experience, any wrapper around sys stuff pulls in make/cmake machinery and gets slower).
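    a few standard cargo tree invocations that help with this (run inside a project; the crate name serde is just an example):

```shell
# full dependency tree of the current project
cargo tree

# show crates that appear in multiple versions (each version compiles separately)
cargo tree -d

# invert the tree: show which of your dependencies pull in a given crate
cargo tree -i serde
```

    duplicated versions found by -d are an easy win - pinning one version means one less crate to compile.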

    now these are all things that don’t change anything about runtime. if you are willing to trade storage space for more performance, you should do 2 things - use sccache and a shared system target dir.

    sccache (https://github.com/mozilla/sccache) is a wrapper around the compiler that caches build artifacts. if a file’s hash stays the same, and you have the same rust toolchain installed (it has not updated in breaking ways), it will reuse the earlier build artifacts. this often helps by reusing something like 50-70% of the crates which stay the same (even across projects).

    after installing, you just go to the cargo dir ($CARGO_HOME) and edit config.toml:

    
    [build]  
    rustc-wrapper = "sccache"  
    target-dir = "<something>/cargo/target"  
    
    [profile.release]  
    lto = true  
    strip = true  # Automatically strip symbols from the binary  
    
    [profile.release.build-override]  
    opt-level = 3  
    codegen-units = 16  
    
    [target.'cfg(target_os = "linux")']  
    # linker = "wild"  
    # rustflags = ["-Clink-arg=-melf_x86_64"]  
    linker = "clang"  
    rustflags = ["-Clink-arg=--ld-path=wild", "-Ctarget-cpu=native"]  
    
    

    target-dir = "<something>/cargo/target" makes it so that instead of each rust project having a separate target dir, all projects share the same one. it can lead to some problems (essentially only 1 cargo compile can run at a time, and some more, but you likely don’t want to compile multiple projects together anyway, since each crate already compiles in parallel as far as possible). it repeats a bit of what sccache does (reusing build artifacts), but this way, if the same version of a crate is used elsewhere, it will not even require a recompile (which sccache would only have made faster), and storage use is reduced as well.

    other than that, you may see i have added options to perform lto and strip on release builds (they will make compile times even longer, but you can get a bit more performance out). i have also changed the linker (default is gcc); i use wild (which currently requires clang on x86), and it can be a tiny bit faster. check whether you actually need lto, as it is no silver bullet (it can even reduce performance, though in the worst cases it stays within 2-5% of builds without it, and in the best cases it gains more). and just generally check the config params for the cargo debug and release profiles (play around with codegen-units; i think the default is higher).
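    to check that the sccache wrapper is actually wired up, you can compare its counters across a clean rebuild (--zero-stats and --show-stats are real sccache flags; run this inside any cargo project):

```shell
# zero the counters, build once cold, wipe, then rebuild from the cache
sccache --zero-stats
cargo build --release
cargo clean
cargo build --release

# the "Cache hits" line should now be non-zero if rustc-wrapper is set correctly
sccache --show-stats
```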




  • for cookies, you can open devtools, go to the network tab, find the pdf file there, right click, and you will see an option along the lines of ‘copy as cURL’. copy that and paste it somewhere, then repeat the exercise for some other file. this should give you a pattern for how to make a query. most likely it just needs a bearer token in a header, a cookie, or something like that.
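    a minimal sketch of what the copied command usually boils down to (the url, cookie name, and token are all made-up placeholders - substitute whatever devtools gives you):

```shell
# download one pdf, replaying the browser's session cookie / auth header
curl -L -o paper.pdf \
  -H 'Cookie: sessionid=PASTE_FROM_DEVTOOLS' \
  -H 'Authorization: Bearer PASTE_TOKEN_IF_PRESENT' \
  'https://example.com/files/paper.pdf'
```

    once one file works, the pattern usually generalises - only the final path changes per file.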




  • try something along the lines of

    wget -r -np -k -p "website to archive recursive download"  
    

    that may work, but in case it does not, i would download the page html, then filter out all pdf links (some regex or grep magic), and just give that list to wget or some other file downloader.
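    a sketch of that filtering step (the page url is a placeholder, and it assumes the links in the html are absolute):

```shell
# pull the page, extract anything that looks like a pdf url, download the lot
wget -qO page.html 'https://example.com/papers/'
grep -oE "https?://[^\"' ]+\.pdf" page.html | sort -u > pdfs.txt
wget -i pdfs.txt
```

    if the links are relative, either prefix the base url with sed first, or fall back to wget -r with an --accept '*.pdf' filter.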

    if you can give the url, we can get a bit more specific.






  • not my domain, so i looked up the title online (the original article is in nature, and not open access, and i’m currently not at uni, so i can not access it through uni wifi)

    here is a theoretical version establishing physics for this effect - https://arxiv.org/pdf/2505.23083v1

    I am also not reading the linked article, as it is too flowery for me.

    so here is a tl;dr - if you know the seebeck effect, this should be somewhat easy. the seebeck effect is an effect where, if there is a temperature gradient in a material, electricity can be generated. i will not go on about why that happens, but as a statistical argument, just keep in mind that as things are heated, they jiggle (very specific physics term, definitely not me stupidifying oscillations). if it is a “bond” between two atoms (aptly named atomic bonds), we consider a quantisation of these oscillations (a fancy way to put a number to how strong the vibration is; there is more to it, but not for now) as phonons.

    another thing is that in materials, these bonds are often arranged in special manners. for most materials, the arrangements are periodic lattices (think jungle gym bars, or a rubik’s cube, or some other periodic arrangement). in these materials, phonons can transfer in different modes, always trapped by the ends. in some materials, the bonds can form helices, where phonons, instead of going in a straight line, travel along the helix. if you know what angular momentum is, great; if not, think of something with some “speed” going in circles - it will have some angular momentum along the axis of that circle. coming back to the main topic, here we have a phonon going along a helix, carrying some angular momentum.

    now this motion of the phonon can create a spin current, which requires a separate tangent about what spin is, which is, well, hard to explain. in most materials, there are 2 types of electrons, and we just name these 2 spins up and down (it has practically nothing to do with the up or down directions). as to why there are only 2 is a really big topic we are not going into, but roughly, it is because of the nature of the material. in non-magnetic materials the 2 spins behave the same, but in magnetic materials they do not - in other words, you can say magnetic materials are magnetic because these 2 spins behave differently in them. in normal current, we have electrons going from one direction to another (kinda, but that is a tangent to a tangent, not going there). in a spin current, the 2 kinds of electrons flow in opposite directions. since both are electrons, no charge difference is created, but a spin potential is. this study showed that in non-magnetic materials (tungsten and titanium), you could generate spin currents by “injecting” angular momentum from quartz crystal phonons. if you have ever heard of angular momentum conservation, this is a consequence of that, as spin current is a kind of angular momentum.

    as to why this could be special: spintronics (the name for using electron spin instead of charge for generating currents and making devices) requires lower power than electronics. one of the problems was that you required special magnetic materials; this is a demonstration without magnetic materials.

  • in my physics world, this is big (on a scale of 1 to 10, 10 being a theory of everything done, 1 being boring desk work - this is a 5-7 - very big in spintronics, and reasonably big in electronics), but to someone outside, not that big for a decade (or decades). we made the first transistors in the 50s and 60s, and reasonable electronic devices (the semiconductor chips) by the 70s and 80s. we made the first spin transistors in the 00s-10s, so i guess another 10 or so years before we see some industry-level production.