dualmindblade [he/him]

  • 6 Posts
  • 31 Comments
Joined 4 years ago
Cake day: September 21st, 2020

  • Had them more than a decade ago, pros couldn’t get rid of them. All I had to do was put all my belongings in the garage, heat that up to 140 degrees F for about 12 hours, dust the entire house including inside the walls with diatomaceous earth, and move out for 3 months. Easy peasy.

    Btw they can’t climb smooth surfaces, so it’s actually practical to just put protectors/detectors on all the legs of your beds and furniture and make sure there’s nothing touching the walls or hanging down to the floor. I did that for a few years out of paranoia actually, and always for the first few months after moving into a new place. I still wake up sometimes and am compelled to turn on the light and check my sheets. They need human blood to survive; unfortunately they can live for quite some time without feeding.



  • Very well, I’ll take that as a sort of compliment lol.

    So I guess I start where I always do: do you think a machine, in principle, has the capability to be intelligent and/or creative? If not, I really don’t have any counter, though I suppose I’d be curious as to why. Like I admit it’s possible there’s something non-physical or non-mechanistic driving our bodies that’s unknown to science. I find that very few hold this hard-line opinion though, assuming you are also in that category…

    So if that’s correct, what is it about the current paradigm of machine learning that you think is missing? Is it embodiment, is it the simplicity of artificial neurons compared to biological ones, something specific about the transformer architecture, a combination of these, or something else I haven’t thought of?

    And hopefully it goes without saying, I don’t think o1-preview is a human-level AGI. I merely believe that we’re getting there quite soon, without too many new architectural innovations, possibly just one or two, and none of them will be particularly groundbreaking. It’s fairly obvious what the next couple of steps will be, just as it was obvious three years ago that MCTS + LLM was the next step.




  • it constructs a concept in a more abstract way then progressively finds a way to put it into words; I know that arguably that’s what it’s doing currently,

    Correct!

    but the fact that it does it separately for each token means it’s not constructing any kind of abstraction

    No!!! You simply cannot make judgements like this based on vague ideas like “autocomplete on steroids” or “stochastic parrot”; those were good for conceptualizing GPT-2, maybe. It’s actually very inefficient, but by re-reading what it has previously written (plus one token) it’s acting sort of like an RNN. In fact we know theoretically that with simplified attention models the two architectures are mathematically equivalent. (There’s a little sketch of what I mean by re-reading at the end of this comment.)

    Let me put it like this. Suppose you had the ability to summon a great novelist as they were at some particular point in their life, pull them from one exact moment in the past, and to do this as many times as you liked. You put a gun to their head, or perhaps offer them alcohol and cocaine, to start writing a novel. The moment they finish the first word, you shoot them in the head and summon the same version again. “Look, I’ve got a great first word for a novel, and if you can turn it into a good paragraph I’ll give you this bottle of gin and a gram of cocaine!” They think for a moment and begin to put down more words, but again you shoot them after word two. Rinse/repeat until a novel is formed. It takes a good while, but eventually you’ve got yourself a first draft. You may also have them refine the novel using the same technique, and you may want to give them some of the drugs and alcohol beforehand to improve their writing and allow them to put aside the fact that they’ve been summoned to the future by a sorcerer. Now I ask you, is there any theoretical reason why this novel wouldn’t be any good? Is the essence of it somehow different than that of any other novel? Can we judge it as not being real art or creativity?
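
    Here, roughly, is the re-reading loop I’m talking about, as a toy sketch in Python. The toy_model below is a made-up stand-in, not anything from a real library; the only point is the shape of the loop: every step re-reads the entire prefix written so far and emits exactly one new token, which is how things can be carried forward even though each call is stateless.

    import random

    def toy_model(prefix: list[str]) -> dict[str, float]:
        """Made-up stand-in for the network: any function mapping a full
        prefix of tokens to a distribution over the next token will do."""
        if len(prefix) % 7 == 0:
            return {".": 0.8, "and": 0.2}      # occasionally close a clause
        return {"word": 0.5, "another": 0.3, "and": 0.2}

    def generate(prompt: list[str], n_tokens: int) -> list[str]:
        tokens = list(prompt)
        for _ in range(n_tokens):
            dist = toy_model(tokens)           # re-read everything written so far
            next_tok = random.choices(list(dist), weights=list(dist.values()))[0]
            tokens.append(next_tok)            # the prefix grows by one token
        return tokens

    print(" ".join(generate(["Call", "me", "Ishmael"], 20)))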


  • The model was trained on self-play; it’s unclear exactly how, whether via regular chain-of-thought reasoning or some kind of MCTS scheme. It no longer relies only on ideas from internet data, that’s just where it started from. It can learn from mistakes it made during training, from making lucky guesses, etc. Now it’s way better at solving math problems, programming, and writing comedy. At what point do we call what it’s doing reasoning? Just like, never, because it’s a computer? Or do you object to the transformer architecture specifically, or what?
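
    Nobody outside the lab knows the exact recipe, so take this as pure illustration rather than a claim about how o1 was actually trained, but the flavor of “learn from your own lucky guesses” fits in a few lines of Python. Everything here is made up: the “model” is just a preference between two strategies and the “verifier” is an exact-answer check; real pipelines sampling chains of thought or running MCTS over reasoning steps are vastly more complicated, the loop shape is the only point.

    import random

    problems = [(3, 4), (5, 6), (2, 9)]            # task: compute a*b
    truth = {p: p[0] * p[1] for p in problems}

    model = {"multiply": 0.5, "add": 0.5}          # initial strategy preferences

    def attempt(problem):
        strategy = random.choices(list(model), weights=list(model.values()))[0]
        a, b = problem
        return strategy, (a * b if strategy == "multiply" else a + b)

    for _ in range(200):                           # many rounds of self-improvement
        for p in problems:
            strategy, answer = attempt(p)
            if answer == truth[p]:                 # verifier confirms the guess
                model[strategy] += 0.05            # reinforce whatever worked
        total = sum(model.values())
        model = {k: v / total for k, v in model.items()}

    print(model)                                   # "multiply" ends up strongly preferred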


  • Look I just don’t think that’s a helpful mindset, there are still good people who don’t understand and/or don’t agree. They probably treat these videos and news as misinformation, or just tell themselves it’s a necessary evil in a fight against an even greater evil; remember, they think that Gazans are mostly united in their hatred against Jews and would like to see them eliminated worldwide. Not that this ought to make much of a difference in the calculation of whether or not to kill an entire culture with no power to fight back… but I have to admit that if I believed this to be true I would be less invested emotionally. Or they just haven’t seen the information at all, that’s probably the bulk of it actually.

    It’s incredibly hard to let go of an ideology, and liberalism, and to a lesser extent democratism, are quite powerful and seductive from the standpoint of someone embedded in American society who hasn’t seen the light when it comes to the whole America-being-evil thing. As emaciated and ill-defined as our culture is, it is still a culture nonetheless, or it appears to be at least; it’s something many people cherish and will want to protect, and when an inclination is so deeply ingrained it creates enormous potential for self-deception and willful ignorance.

    So do I have trouble not hating them? Yes, but on the other hand I also see where they’re coming from; their experiences have likely been quite different from my own, and that means there may be potential for conversion via the inception of new experience. You have to admit it’s already happening to an extent: there are a lot of libs out there who have been partially black-pilled by this whole genocide thing, which they still see as merely another war, albeit a particularly gruesome one. They’re on the right path. Are they moving too slowly, more disinterested in completing the journey of enlightenment than they need to be, perhaps not even aware they haven’t reached the end? Yes. Will most of them never make it? Probably. But some will, and I happen to believe that if you are potentially a future ally you should not be treated as an enemy, at least in the absence of other mitigating circumstances.

    What lies between us and the truth of this situation is a window, crystal clear to you, to me, to some still in the “progressive” category who keep themselves well informed. What we sometimes forget is that it is our ideology and education and personal inclinations that give us X-ray vision; we see certain things with such clarity that we forget the window is there at all, and that it is opaque to certain frequencies of light. But that type of vision is not yet available to all; some may have ways of developing it to various extents, but they will all involve some level of pain, guilt, loss, grief, and sheer effort.

    The effort: we’re all quite tired, we need our sleepy time, and yet some person, who my friends tell me I should hate or ignore, is telling me to get out of bed and embark on some spirit quest which, if my friends knew I had gone along, could jeopardize our relationships? Oh, and this person btw appears to be even more tired and quite miserable compared to me, and filled with anger? No thanks, I’ll just get a good night’s sleep instead. After all, I have work in the morning.

    We have the majority very roughly on our side, but we need even more, and we need to smooth out that roughness; that is part of our job right now. We need to recognize when this is and isn’t feasible, and accept that polishing a surface is sometimes a many-step process, especially if that surface is particularly hard or too brittle. But if we are too picky about the tools we are willing to apply, the raw materials we’re willing to work with, we are using our labor inefficiently, and we end up with fewer and lower-quality products which might otherwise have been unique items of great use in enacting our plans.















  • Okay, just thinking out loud here: everything I’ve seen so far works as you described, the training data is taken either from reality or generated by a traditional solver. I’m not sure this is a fundamental limitation though; you should be able to create a loss function that asks “how closely does the output satisfy the PDE?” rather than “how closely does the output match the data generated by my solver?”. But anyway, you wouldn’t need to improve on the accuracy of the most accurate methods to get something useful: if the NN is super fast and has acceptable accuracy you can use that to do the bulk of your optimization and then use a regular simulation and/or reality to check the result and possibly do some fine-tuning.
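
    To put some code behind the “penalize the PDE residual instead of the solver’s output” idea, here’s a rough physics-informed-style sketch in PyTorch for a toy 1D problem I picked purely for illustration: u''(x) = -sin(x) on [0, π] with u(0) = u(π) = 0, whose exact solution is sin(x). The architecture and hyperparameters are arbitrary placeholders.

    import math
    import torch

    net = torch.nn.Sequential(
        torch.nn.Linear(1, 64), torch.nn.Tanh(),
        torch.nn.Linear(64, 64), torch.nn.Tanh(),
        torch.nn.Linear(64, 1),
    )
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)

    for step in range(2000):
        x = (torch.rand(256, 1) * math.pi).requires_grad_(True)   # random collocation points
        u = net(x)
        du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
        d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
        pde_residual = d2u + torch.sin(x)          # ~0 wherever the PDE is satisfied
        bc = torch.tensor([[0.0], [math.pi]])      # boundary points where u should be 0
        loss = pde_residual.pow(2).mean() + net(bc).pow(2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()

    print(float(net(torch.tensor([[math.pi / 2]]))))   # should approach sin(pi/2) = 1

    No solver-generated data appears anywhere in that loop; the only supervision is the equation itself plus the boundary conditions.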


  • So this is way, way outside my expertise, grain of salt and whatnot… Wouldn’t the error in most CFD simulations, regardless of technique, quickly explode to its maximum due to turbulence? Like if you’re designing a stirring rotor for a mixing vessel you’re optimizing for the state of the system at T + [quite a bit of time], and I don’t believe hand-crafted approximations can give you any guarantees here. And I get the objection about training time, but I think the ultimate goal is to train a NN on a bunch of physical systems with different boundary conditions and fluid properties, so you only need to train once and then you can just do inference forevermore.
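
    And to gesture at the “train once, reuse forever” part: the surrogate would be conditioned on the problem parameters, so a new design point is just a forward pass. Here’s a sketch of the interface I have in mind, with untrained weights and made-up parameters (Reynolds number, rotor speed, fill fraction); it’s only meant to show the shape of the thing, not any particular published architecture.

    import torch

    class FlowSurrogate(torch.nn.Module):
        def __init__(self, n_params=3, hidden=128):
            super().__init__()
            self.net = torch.nn.Sequential(
                torch.nn.Linear(3 + n_params, hidden), torch.nn.Tanh(),
                torch.nn.Linear(hidden, hidden), torch.nn.Tanh(),
                torch.nn.Linear(hidden, 4),        # (u, v, w, pressure) at a query point
            )

        def forward(self, xyz, params):
            p = params.expand(xyz.shape[0], -1)    # broadcast parameters to every point
            return self.net(torch.cat([xyz, p], dim=1))

    model = FlowSurrogate()
    points = torch.rand(1024, 3)                   # query locations inside the vessel
    for rotor_speed in [50.0, 100.0, 200.0]:       # cheap design sweep, no new simulations
        params = torch.tensor([[1e4, rotor_speed, 0.8]])   # Re, rpm, fill fraction
        field = model(points, params)              # one fast forward pass per candidate
        print(rotor_speed, field.shape)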