masterbaiter69
Flag: Japan
Registered: September 21, 2022
Last post: February 15, 2024 at 9:47 AM
Posts: 1168
1 •• 6 7 8 9 10 11 12 •• 23

Leroge Leddes clears

posted 11 months ago

yeah i think it's difficult for all those uneducated VLR users that spam "allat" to understand this masterpiece

posted 11 months ago

is HunterxHunter and it's not even close
the problem is, we might never see the ending

posted 11 months ago

what did i do wtf

posted 11 months ago

finally what???

posted 11 months ago

no one hates him in person, those haters hate everyone because their lives are miserable

posted 11 months ago

T1 is legit looking good

posted 11 months ago

do you still think PRX 2nd is overrated?

posted 11 months ago

🤓 DRX were saving strats for masters
🤓 DRX were practicing with a new team
yeah no shit they were saving strats when first place gets a free pass to playoffs
fucking idiot

posted 11 months ago

best player in the world
+31 against DRX

posted 11 months ago

post your d1ck

posted 11 months ago

no it's not but i guess it's hard to prove
all good

posted 11 months ago

it's not

posted 11 months ago

NAVI flair and a stupid take, nothing special

posted 11 months ago

F

posted 11 months ago

destroy them prx

posted 11 months ago

messi clears

posted 11 months ago

put him on reyna

posted 11 months ago

something breach is a troll put him on jett

posted 11 months ago

ok

posted 11 months ago

what is manko?

posted 11 months ago

so good wtf

posted 11 months ago

we will never see him play in franchise league again
rip

posted 11 months ago

something best player in the world

posted 11 months ago

THIS IS THE ONLY REASON THAT MAKES SENSE, EVERY OTHER REASON MAKES NO SENSE

posted 11 months ago

i don't agree with that.
the delusional fans are just stupid and can't understand that ZETA needs a roster change. Instead they genuinely believe that they can win something with this roster like they did in Reykjavik.
i understand your point tho

posted 11 months ago

saying most fans don't care about results at all is going too far, isn't it? it's just that the feeling of "i want them to keep going with the current team without changes" is too strong

posted 11 months ago

makes sense, the same thing can be said about my country

posted 11 months ago

i love the Dominican Republic, i went there with my family when i was like ten years old and it was such a nice country

posted 11 months ago

who? like who are you? like actually who the fuck are you? who does even know you? does anyone even know that you exist? I'm sorry but who are you?

posted 11 months ago

haha u are so funny hahaha

posted 11 months ago

where are you from, like actually

posted 11 months ago

yo who was that
i demolished you lmao

posted 11 months ago

whoever that was, ggs

posted 11 months ago

it's 11pm for me what else should i even do?

posted 11 months ago

install it again!!

posted 11 months ago

of course, no one could beat me

posted 11 months ago

my bad i forgot to change

posted 11 months ago

anyone down to fight?
i'm kinda good at this game(ultimate champion 🤓)
i'll do anything if i lose

https://link.clashroyale.com/invite/friend/jp?tag=9R9V929PU&token=acn9fh6y&platform=iOS

posted 11 months ago

good job you did the right thing

posted 11 months ago

he disappeared right before lan lol
that proves everything

posted 11 months ago

game for kindergarten kids

posted 11 months ago

worse

posted 11 months ago

stop playing that dogshit game

posted 11 months ago

don't disrespect my name like that brother

posted 11 months ago

An open letter published today calls for “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”
This 6-month moratorium would be better than no moratorium. I have respect for everyone who stepped up and signed it. It’s an improvement on the margin.

I refrained from signing because I think the letter is understating the seriousness of the situation and asking for too little to solve it.
The key issue is not “human-competitive” intelligence (as the open letter puts it); it’s what happens after AI gets to smarter-than-human intelligence. Key thresholds there may not be obvious, we definitely can’t calculate in advance what happens when, and it currently seems imaginable that a research lab would cross critical lines without noticing.
Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die. Not as in “maybe possibly some remote chance,” but as in “that is the obvious thing that would happen.” It’s not that you can’t, in principle, survive creating something much smarter than you; it’s that it would require precision and preparation and new scientific insights, and probably not having AI systems composed of giant inscrutable arrays of fractional numbers.
Without that precision and preparation, the most likely outcome is AI that does not do what we want, and does not care for us nor for sentient life in general. That kind of caring is something that could in principle be imbued into an AI but we are not ready and do not currently know how.
Absent that caring, we get “the AI does not love you, nor does it hate you, and you are made of atoms it can use for something else.”
The likely result of humanity facing down an opposed superhuman intelligence is a total loss. Valid metaphors include “a 10-year-old trying to play chess against Stockfish 15,” “the 11th century trying to fight the 21st century,” and “Australopithecus trying to fight Homo sapiens.”
To visualize a hostile superhuman AI, don’t imagine a lifeless book-smart thinker dwelling inside the internet and sending ill-intentioned emails. Visualize an entire alien civilization, thinking at millions of times human speeds, initially confined to computers—in a world of creatures that are, from its perspective, very stupid and very slow. A sufficiently intelligent AI won’t stay confined to computers for long. In today’s world you can email DNA strings to laboratories that will produce proteins on demand, allowing an AI initially confined to the internet to build artificial life forms or bootstrap straight to postbiological molecular manufacturing.
If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter.
There’s no proposed plan for how we could do any such thing and survive. OpenAI’s openly declared intention is to make some future AI do our AI alignment homework. Just hearing that this is the plan ought to be enough to get any sensible person to panic. The other leading AI lab, DeepMind, has no plan at all.
An aside: None of this danger depends on whether or not AIs are or can be conscious; it’s intrinsic to the notion of powerful cognitive systems that optimize hard and calculate outputs that meet sufficiently complicated outcome criteria. With that said, I’d be remiss in my moral duties as a human if I didn’t also mention that we have no idea how to determine whether AI systems are aware of themselves—since we have no idea how to decode anything that goes on in the giant inscrutable arrays—and therefore we may at some point inadvertently create digital minds which are truly conscious and ought to have rights and shouldn’t be owned.
The rule that most people aware of these issues would have endorsed 50 years earlier was that if an AI system can speak fluently and says it’s self-aware and demands human rights, that ought to be a hard stop on people just casually owning that AI and using it past that point. We already blew past that old line in the sand. And that was probably correct; I agree that current AIs are probably just imitating talk of self-awareness from their training data. But I mark that, with how little insight we have into these systems’ internals, we do not actually know.
If that’s our state of ignorance for GPT-4, and GPT-5 is the same size of giant capability step as from GPT-3 to GPT-4, I think we’ll no longer be able to justifiably say “probably not self-aware” if we let people make GPT-5s. It’ll just be “I don’t know; nobody knows.” If you can’t be sure whether you’re creating a self-aware AI, this is alarming not just because of the moral implications of the “self-aware” part, but because being unsure means you have no idea what you are doing and that is dangerous and you should stop.

posted 11 months ago

Pausing AI Developments Isn't Enough. We Need to Shut it All Down

posted 11 months ago

living in peace

posted 11 months ago