TIME.mk

Gorgi Kosev @spion
#reach: 1232 (1565th)
#followers: 1232 (3787th)
@Neolectron @ThePrimeagen @vitimus1 @t3dotgg Based on what I'm reading about it, it's not that ESM is bad, it's that the transition was poorly standardized, and there are now many ways to specify what is or isn't ESM and where different kinds of module sources live.
0 retweets · 2 favorites
@Neolectron @ThePrimeagen @vitimus1 @t3dotgg In JS land it seems that even somewhat poor standards are better than late standards; when a standard is late, an ecosystem with a billion options sprawls quickly and it's hard to contain later.
0 retweets · 2 favorites
@Neolectron @ThePrimeagen @vitimus1 @t3dotgg I would argue that not providing a framework to standardize module resolution, and leaving it all to userland, is also to blame.
0 retweets · 2 favorites
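For context on the thread above, a minimal sketch of a few of the overlapping format signals being referred to. The code itself is illustrative; the resolution rules in the comments reflect Node.js's documented behavior.

```ts
// Some of the (many) ways to mark code as ESM vs CommonJS in Node.js:
//  - extension: .mjs files are always ESM, .cjs files are always CommonJS
//  - nearest package.json: "type": "module" flips the default for plain .js files
//  - conditional exports: { "exports": { ".": { "import": "./esm.js", "require": "./cjs.js" } } }
//    lets one package ship both formats, resolved per consumer

// ESM source: static import/export syntax
import { readFile } from "node:fs/promises";
export const read = (path: string) => readFile(path, "utf8");

// CommonJS equivalent (in a .cjs file): dynamic require, exports object
// const { readFile } = require("node:fs/promises");
// module.exports.read = (path) => readFile(path, "utf8");
```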
@dan_abramov I agree it's super exciting, but I'm also a little worried. Based on what I've read so far, with the way we train / "fix" these things, we're as likely to end up with results that are more often correct as with results for which we can't tell whether they're wrong
0 retweets · 2 favorites
@unclebobmartin @jerzydejm @2hamed I'm bewildered at how you manage to keep composure and confidence while being completely wrong in an area you clearly haven't researched.
0 retweets · 1 favorite
@unclebobmartin @ThePunterPete @Sileence @jerzydejm @2hamed And here is another one, which I'm sure you'll ignore, but which others who happen to read this thread might find fascinating https://t.co/lDVlrGfU24
0 retweets · 1 favorite
@rickasaurus Are you familiar with the reasons why he thinks it would?
0 retweets · 1 favorite
@rickasaurus Ok, then I think that if we found a way to execute target goal optimization without causing unexpected side effects, or if we succeeded in limiting the scope of unexpected side effects, he might be convinced.
0 retweets · 1 favorite
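A toy sketch of the second idea in the tweet above (my illustration, not any published method; all names are hypothetical): limiting the scope of side effects can be framed as a constraint, so the optimizer only considers actions whose estimated impact stays under a budget.

```ts
// Hypothetical action type: a task reward plus an estimated side-effect score.
type Action = { name: string; taskReward: number; sideEffects: number };

// Constrained goal optimization: discard actions whose estimated side effects
// exceed the budget, then pick the highest-reward action among the rest.
function chooseAction(actions: Action[], sideEffectBudget: number): Action | undefined {
  const feasible = actions.filter(a => a.sideEffects <= sideEffectBudget);
  return feasible.reduce<Action | undefined>(
    (best, a) => (best === undefined || a.taskReward > best.taskReward ? a : best),
    undefined
  );
}
```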
@rickasaurus That does happen repeatedly in practice https://t.co/oeSCdi4qgN
0 retweets · 1 favorite
@ThePunterPete @Sileence @unclebobmartin @jerzydejm @2hamed Anyways, here is a fun article https://t.co/qCYLAaKmv7 and its corresponding paper https://t.co/PMr4sPiBDx that are somewhat related to this discussion (they study how LLMs learn games like these)
0 retweets · 0 favorites
@unclebobmartin @ThePunterPete @Sileence @jerzydejm @2hamed Yeah, it's not my job to do your research for you, or even to convince you to do it yourself. I did post an article (OthelloGPT) for the curious.
0 retweets · 0 favorites
@GidMK We know what the numbers are, but we don't really know how they were chosen. They could've been chosen by setting various values and observing certain metrics.
0 retweets · 0 favorites
@rickasaurus "Least action principle" sounds like a good idea in building reward functions, I'm not familiar enough with the research to say whether its been explored already...
0 retweets · 0 favorites
@rickasaurus Are we doing the least action possible to achieve our evolutionary goals? (propagate genes)
0 retweets · 0 favorites
@rickasaurus To my knowledge, value maximizers have no such limitations on complexity, i.e. they would take arbitrarily complex actions so long as those maximize the reward value. (see the DeepMind article I linked above)
0 retweets · 0 favorites
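To make the contrast in the last few tweets concrete, a toy sketch (again my illustration, with hypothetical names): a pure value maximizer picks whatever maximizes reward no matter how complex the action, while a "least action"-flavored objective subtracts an effort term.

```ts
// Hypothetical action type: a reward and an effort/complexity cost.
type Action = { reward: number; effort: number };

// Pure value maximizer: the complexity of the action is irrelevant.
// (Assumes a non-empty action list.)
const maximize = (actions: Action[]): Action =>
  actions.reduce((best, a) => (a.reward > best.reward ? a : best));

// "Least action"-flavored objective: the same reward is worth less
// if it takes a more complex / higher-effort action to obtain.
const maximizeWithEffortPenalty = (actions: Action[], lambda: number): Action =>
  actions.reduce((best, a) =>
    a.reward - lambda * a.effort > best.reward - lambda * best.effort ? a : best
  );
```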
@rickasaurus I think Robert Miles https://t.co/pAydfXiIbP is very good at explaining some of the concerns shared by him and Eliezer Yudkowsky in sufficient detail.
0 retweets · 0 favorites
@rickasaurus I'm actually pretty excited about LLMs becoming a breakthrough in AI safety: at the very least, they can encode and check specifications of near-human complexity, something we couldn't do before
0 retweets · 0 favorites
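A minimal sketch of what "encode and check specifications" could look like in practice. Here llmComplete is a hypothetical stand-in for any chat-completion API, and the YES/NO prompt protocol is my assumption, not an established interface.

```ts
// Hypothetical stand-in for any LLM completion API.
declare function llmComplete(prompt: string): Promise<string>;

// Ask the model whether an output satisfies a natural-language specification.
async function satisfiesSpec(spec: string, output: string): Promise<boolean> {
  const verdict = await llmComplete(
    `Specification:\n${spec}\n\n` +
    `Candidate output:\n${output}\n\n` +
    `Does the candidate output satisfy the specification? Answer YES or NO.`
  );
  return verdict.trim().toUpperCase().startsWith("YES");
}
```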
Most often talks with:
# | Name | @Nick | # discussions