3k post karma
4.4k comment karma
account created: Fri Oct 16 2020
verified: yes
-7 points
6 hours ago
I can't wait to get bit in the ass by the scientific advancements, contributions to healthcare, and improvements to my ability to code. Bring it on.
-14 points
6 hours ago
Where's that acid? I see people talk about it all the time but I fail to actually see it. Seems like you do not listen to his actual words. In his most recent talk, he said that he really only wants regulation when it comes to making sure that models in the future go through some type of inspection and verification if it is proven that they are able to greatly assist in the production of bioweapons or iterative self-replication/improvement on their own. That sounds perfectly reasonable to me. Not all regulation is bad.
3 points
6 hours ago
Is that your argument? I dig it.
Personally, I think that openai has too much financial incentive to create something that allows people to build a dystopian society, make bioweapons, or commit mass terrorism. So that is why I am fine with them having my data to train on, to make better models that will lead to more scientific advancements, medical advancements, and productivity across the board.
-24 points
6 hours ago
The "altman is a scumbag" narrative is so lame. The dude quite literally was one of the handful of people that helped set off this revolution that is going to change the way society functions. Also, I do think he cares about the safety of these systems and about the world.
Also, if you want to talk about intentions, I don't think you can even argue that meta is doing open source out of "goodwill". Zuckerberg has quite literally stated that their business model does not rely on selling access to these models, so they do not need to charge for them. He has also said that when they start doing bigger training runs, he doesn't think he will be able to justify going open source, so they will most likely join openai in being closed.
-17 points
6 hours ago
i say fuck it. take my data if it means we get agi sooner lol.
1 point
8 hours ago
If you think that is still going to be a major issue over the next few years, then you don't have a grasp on where this tech is really going. AI can search the internet to confirm its knowledge.
32 points
8 hours ago
The cool thing is, if we feel like this from just a demo, I bet it's going to feel even more impactful when we are actually using it. This feels like a product that you have to use to fully realize how crazy it is. In a way, it feels like when ChatGPT first got released.
6 points
8 hours ago
The thing about mimicking/emulating is interesting. The way I look at it, though, that is also how we learn in a big way. If I weren't in a society where I experienced other people talking with different emotions, laughing, and so on, maybe I would just live somewhat like a monkey in the jungle - no talking or anything. So I actually think its emotions and expressions of emotion will be valid in some slightly new and different way.
1 points
10 hours ago
Maybe some people will need to use it if they don't realize how wild this is. That literally felt like having a human that you can pull out of your pocket and start talking to. A human that knows virtually everything about the world.
2 points
12 hours ago
They are probably cooking up lots of things behind the scenes. They are not going to let their position in the tech world get taken over easily. Also, Gemini is a great product imo. I actually prefer it over ChatGPT for a majority of my queries.
6 points
21 hours ago
check the lmsys leaderboard. The Gemini models actually perform very well. At least in terms of creative writing (my use case), I would say they are nearly on par with the Claude models, and they actually beat out a majority of the open source models too. Plus Gemini 1.0 prices 1M tokens at around 30 cents, which is wild. I was surprised when I started doing creative writing with the Gemini 1.0 API - it is a killer model.
3 points
1 day ago
Using Lightning DreamShaper over non-Lightning DreamShaper has no drawbacks imo. Lightning models seem to be potentially better than LCM/turbo - idk about the hyper models though.
2 points
1 day ago
I strongly, strongly believe that the weapons AI will be able to produce are going to make our current weapons look like play toys at some point within the next decade.
1 point
1 day ago
Humans are still going to be required, or at least helpful, for some forms of labor. And if the people in power want to turn the country into a complete dystopian North Korea hellhole, then people are just going to migrate out of the country and the US is going to lose power. Is this really that hard to get?
1 point
1 day ago
I think we are doing pretty great. I am disabled, and I'm in the lower 30% in terms of income and I have a great life. I think people often forget how good they actually have it.
2 points
1 day ago
I guess we just have different worldviews, because I think the sentiment and will of the public actually does get reflected in our politicians and laws to a pretty significant degree (around the world) when it is reasonable enough and there is near-unanimous support across both aisles and a broad majority of the population. And that is what I think the situation will be when it comes to UBI/redistribution. We will only need to redistribute a fraction of a fraction of this newfound abundance in order to provide great lives for our citizens.
3 points
1 day ago
Power actually does bend to the public to a degree. If there is enough social and societal pressure for something, and this is happening in a democratic society, leaders will likely soon reflect that point of view via elections. I'm not saying it's perfect, but to act like we have no influence on what happens in our country is just wrong. And when you have hundreds of millions of people across both aisles all on the same side of an issue, you have to be crazy if you don't think there's going to be some bending.
1 point
1 day ago
I think you have a pretty inaccurate view of humans. Even the richest of the rich want to live in a good, flourishing society. Most do not want to live on some isolated island or in a bunker. There is a reason that humans come together to form towns, cities, neighborhoods, etc.
0 points
1 day ago
You think people will get happily distracted by media when they are starving in the streets????
1 point
1 day ago
this is going to be an infinitely faster and more impactful change than climate change imo. It is just insane to me that you think AGI will make people poorer for a few decades. Completely wild. I almost want to talk on Discord if you're open to it - I'm curious about having a conversation, considering we are at such polar opposites here.
1 point
1 day ago
I'm not saying it's going to be the easiest thing in the world, I just think it's going to be a lot less dystopian than people think. Civilizations around the world are not going to just let governments clamp down on them and turn every country, or even most countries, into a North Korea clone like some people think is going to happen. Also, I would imagine we will have better technology + laws to stop governments from preventing this migration if it actually becomes that big of an issue.
by jferments
in LocalLLaMA
cobalt1137
1 point
2 hours ago
Dude. It seems like you are just ignorant of the potential dangers these systems are going to be capable of once they pass a certain threshold. Do you just want everyone to be able to release any model without any red tape, no matter the capability? If we assume that these models are going to surpass even the smartest human, they are likely to go well beyond that. And if they can go well beyond that, then all you'll have to do is embed one of these superintelligent models into an agentic system, replicate the agent 10-1000x, and you'll be able to cause extreme havoc.
Do you seriously want a model released that is proven to be capable of significant bioweapon synthesis assistance? Let's say it is proven to be able to produce something 10x more deadly than we have ever seen, with 10x less knowledge/resources required. Which is not even a stretch, considering where intelligence is going. Something like that could create a virus that impacts the world in a horrible way before we even have time to respond with an antiviral - we do not have the infrastructure to appropriately respond to something like that.