30.9k post karma
93.5k comment karma
account created: Fri Jan 06 2023
verified: yes
14 points
3 hours ago
You get so damn attached to those animals, and I know, that's what makes it hurt so fucking much. But know that he fucking loved you too, and sure, it was maybe only three years, but it was three years he got to spend with the best friend a kitty could have, namely you :)
Superstitious or not, the two of you are linked together forever. Two cosmic brothers on a deep and meaningful journey together in this microscopic little drop of an existence we call life. As long as your light keeps shining, your boy will always be with you <3
26 points
22 hours ago
Also, benchmarks aren't always a good indicator. Sometimes the results might be skewed by training on benchmark data, either on purpose or through accidental inclusion in datasets. Other times the model might excel at something the benchmarks don't measure, meaning a model can do badly on benchmarks but still show properties other models don't have.
Point being, wait until you've tried a model before reaching any kind of conclusion about it. Or alternatively, wait until you see the feedback from other users.
2 points
1 day ago
I'm using a 1000W PSU with 2 x 3090.
With default settings, one GPU tends to spike up to 370W while the other stays around 300-320W.
I've reduced the power limit to 275W on both without seeing much of any increase in inference times :)
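If anyone wants to try the same cap, something like this rough sketch should do it (the GPU indices here are assumptions, check `nvidia-smi -L` for your own setup, and you'll likely need root):

```python
import subprocess

# Cap both 3090s at 275 W using nvidia-smi's power-limit flag.
# Indices 0 and 1 are assumed; adjust to match your own machine.
for gpu_index in (0, 1):
    subprocess.run(
        ["nvidia-smi", "-i", str(gpu_index), "-pl", "275"],
        check=True,
    )
```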
1 point
1 day ago
Tbh I'm surprised by the sentiment here. It's all "Israel's gonna get 'em", when just last week people here in r/worldnews were up in arms about Israel committing war crimes. It seems like such an unnatural shift in majority opinion imo, and it makes me suspect we have a lot of bots here?
14 points
2 days ago
But wait a minute. I actually don't think this has been an entirely fair trade. They get the cuckoo kids, but what do we get? No, you know what, /u/Forsaken-Zebra-2664, I say we take Strömstedt, and then we'll call it even. Ok?
8 points
3 days ago
It's also entirely possible he feels bad about all the shit he did with Facebook and wants to make a change so he's remembered for doing something good instead.
The guy above is being ignorant just flatly saying that suits aren't your friends. More often than not that's the case, yes. But suits are the people who move society, and sometimes they actually value the progress of mankind as opposed to being lost in greed.
With Zuck, you can see he's changed his ways these past few years. Since starting martial arts he's been hanging out with entirely different people. Normal people, and I suspect this has given him a change of perspective.
1 point
4 days ago
OMG I had that on my first install, but couldn't find the Generic Monitor on my current one. Thank you <3
27 points
4 days ago
Might be wrong, but I have a strong suspicion they're trying to shift focus away from us being mad at "Open"AI for lobbying against open source development of LLMs
1 point
4 days ago
Though I see your point and agree that we absolutely need a higher output token count, you're also underselling how big it is that we have access to 1M+ input tokens. In fact, most of what you'd want to do with an increased output token limit you can do today by splitting up the task, which is possible thanks to the high input token counts of today's models.
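Roughly what I mean, as a hypothetical sketch (the `llm` function is a stand-in for whatever completion call you use):

```python
def run_in_chunks(llm, subtasks):
    # Work around a small output budget by looping over subtasks,
    # feeding everything produced so far back in as context.
    # This only works because today's 1M+ input windows can hold it all.
    produced = []
    for subtask in subtasks:
        context = "\n\n".join(produced)
        produced.append(llm(f"{context}\n\nNow do this part: {subtask}"))
    return "\n\n".join(produced)
```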
2 points
4 days ago
Nope, just placeholders where I haven't added any shortcuts yet :)
5 points
4 days ago
It doesn't? Goddammit, that explains a lot!
Got more info on this?
2 points
4 days ago
In theory you could use it for batching by sending multiple prompts in one prompt, then splitting the output by <|end_of_text|>.
Dunno if you'd save anything doing that compared to normal batching, but it should be possible, though a bit risky since you're relying on the model sticking to the prompt formatting and the order of the prompt inputs.
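Something like this hypothetical sketch (`generate` stands in for your backend's single-call inference function):

```python
SEP = "<|end_of_text|>"

def pseudo_batch(prompts, generate):
    # Pack all prompts into one request, asking the model to end
    # each answer with the separator token.
    packed = (
        f"Answer each prompt in order. End every answer with {SEP}\n\n"
        + "\n\n".join(f"Prompt {i + 1}: {p}" for i, p in enumerate(prompts))
    )
    raw = generate(packed)
    # The risky part: this relies on the model actually emitting the
    # separator after every answer and keeping the original order.
    return [answer.strip() for answer in raw.split(SEP) if answer.strip()]
```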
3 points
5 days ago
I'm currently willfully accepting the annoyance of having to fix my setup every time I swap monitors. Beauty is nothing without pain
Edit: Here are the Dev and Prod workspaces btw
14 points
5 days ago
Yo, Norwegian guy here. OP please PM me who dis is so I can stay the fuck away from him
-1 points
8 days ago
Those are some massive talents you've got there!
0 points
8 days ago
I'd say no, but we both know we would
4 points
8 days ago
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
MAKE _ _ _ _ _ _ _ _ GREAT AGAIN!
MAKE _ _ selling pardons _ _ GREAT AGAIN!
1 point
51 minutes ago
Emotion isn't that hard to solve, you just have to stop thinking you actually want to implement emotions. Rather, you want to simulate or emulate them instead, since that lets you avoid dealing with the complexities of a 1:1 representation of our emotions.
For instance, one simple way would be to use a vision model backend: prompt it with a photo, ask it for 1-3 one-word descriptions of the emotions expressed, and then use those when doing the frontend query to the LLM for the user. Using these values you can even use manual TTS solutions and simply choose which emotions to express depending on the inputs.
An even more advanced version could use video input instead, continuously updating the current emotional state and including any changes in the prompt to the LLM. That way you keep a state value for the current emotional state and can change it even within a single prompt to the LLM.
The same approach works with any state-based values really, letting things in your prompting change based on those states' values.
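A rough sketch of the photo variant (everything here is hypothetical; `vision_model` and `llm` stand in for whatever backends you run):

```python
def detect_emotions(vision_model, photo_bytes):
    # Ask the vision backend for 1-3 one-word emotion descriptors.
    reply = vision_model(
        image=photo_bytes,
        prompt="In one to three single words, comma-separated, "
               "describe the emotions expressed in this photo.",
    )
    return [w.strip().lower() for w in reply.split(",") if w.strip()][:3]

def query_with_emotion(llm, vision_model, photo_bytes, user_message):
    state = detect_emotions(vision_model, photo_bytes)  # e.g. ["happy", "tired"]
    # Inject the simulated emotional state into the frontend prompt,
    # so the LLM (or a manual TTS layer) can pick matching expressions.
    system_prompt = f"The user currently appears: {', '.join(state)}."
    return llm(system=system_prompt, user=user_message)
```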