@Neolectron@ThePrimeagen@vitimus1@t3dotgg Based on what I'm reading about it, it's not that ESM is bad, it's that the transition was poorly standardized: there are now many ways to specify what is or isn't ESM, and where different kinds of module sources live.
@Neolectron@ThePrimeagen@vitimus1@t3dotgg In JS land it seems that even a somewhat poor standard is better than a late one; a late standard means the ecosystem quickly sprawls into a billion options, and that's hard to contain later.
@Neolectron@ThePrimeagen@vitimus1@t3dotgg I would argue that not providing a framework to standardize module resolution and leaving it all for userland to resolve is also to blame.
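To illustrate the sprawl mentioned above, here's a minimal sketch of a dual-format package.json (field names follow Node's documented conventions; the file paths are hypothetical). The `"type"` field, the `.mjs`/`.cjs` file extensions, and conditional `"exports"` entries can each independently affect whether a file is treated as ESM or CommonJS:

```json
{
  "name": "my-lib",
  "type": "module",
  "main": "./dist/index.cjs",
  "exports": {
    ".": {
      "import": "./dist/index.js",
      "require": "./dist/index.cjs"
    }
  }
}
```

With `"type": "module"`, plain `.js` files resolve as ESM, while the `"require"` condition still has to point at a `.cjs` file so CommonJS consumers don't break; bundlers and older tools may consult `"main"` instead of `"exports"`, which is exactly the kind of overlap being complained about.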
@dan_abramov I agree it's super exciting, but I'm also a little worried. Based on what I've read so far, the way we train / "fix" these things seems as likely to produce results that are more often correct as it is to produce results for which we can't tell whether they're wrong.
@unclebobmartin@jerzydejm@2hamed I'm bewildered at how you manage to keep composure and confidence while being completely wrong in an area you clearly haven't researched.
@rickasaurus Ok, then I think that if we found a way to execute target-goal optimization without causing unexpected side effects, or succeeded in limiting the scope of those side effects, he might be convinced.
@GidMK We know what the numbers are, but we don't really know how they were chosen. They could've been chosen by setting various values and observing certain metrics.
@rickasaurus "Least action principle" sounds like a good idea for building reward functions; I'm not familiar enough with the research to say whether it's been explored already...
@rickasaurus To my knowledge, value maximizers have no such limitations on complexity, i.e. they would take arbitrarily complex actions so long as those actions maximize the reward value. (See the DeepMind article I linked above.)
@rickasaurus I think Robert Miles https://t.co/pAydfXiIbP is very good at explaining some of the concerns shared by him and Eliezer Yudkowsky in sufficient detail.
@rickasaurus I'm actually pretty excited about LLMs becoming a breakthrough in AI safety: at the very least they have the ability to encode and check specifications of near-human complexity, something we didn't have before.