wsippel
It’s partially EU funded:
NGI0 Entrust is made possible with financial support from the European Commission’s Next Generation Internet programme, under the aegis of DG Communications Networks, Content and Technology.
This project has received funding from the European Union’s Horizon Europe research and innovation programme under grant agreement No 101069594.
https://nlnet.nl/entrust/
There are also Servo and WebKit. Servo was kinda dead for a while, but the project was recently transferred to the Linux Foundation and revived by Igalia, with funding from Futurewei. Not suitable for daily use yet, but worth keeping an eye on. WebKit is of course used by Safari (which I guess makes it the second most used browser engine after Chromium), but also by Epiphany on Linux. I’m not aware of any Windows browsers using WebKit. Fun fact: Chromium’s Blink engine was forked from WebKit, which in turn was forked from KDE’s KHTML and KJS engines.
Great idea! I think Lemmy and kbin could do with plugin systems, so instances could easily add something like this and other instance specific features if they want to.
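Purely hypothetical, since neither Lemmy nor kbin has anything like this today (and they’re written in Rust and PHP, not Python), but the kind of hook registry I’m imagining could look roughly like this - all names made up:

```python
# Hypothetical sketch -- neither Lemmy nor kbin exposes hooks like this.
# A minimal event/hook registry an instance plugin system might offer.
from typing import Callable

_hooks: dict[str, list[Callable[[dict], dict]]] = {}

def register(event: str) -> Callable:
    """Decorator: attach a plugin handler to a named instance event."""
    def wrap(fn: Callable[[dict], dict]) -> Callable[[dict], dict]:
        _hooks.setdefault(event, []).append(fn)
        return fn
    return wrap

def emit(event: str, payload: dict) -> dict:
    """Run every handler registered for an event; each may transform the payload."""
    for fn in _hooks.get(event, []):
        payload = fn(payload)
    return payload

# An instance-specific plugin: tag posts from a local community.
@register("post_created")
def add_local_flair(post: dict) -> dict:
    if post.get("community") == "our_town":
        post["flair"] = "local"
    return post

print(emit("post_created", {"community": "our_town", "title": "Meetup on Friday"}))
```

The point being: instances could drop in features like this without forking the whole codebase.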
Most web companies never turned a profit and have business models that would never be profitable; endless growth propped up by venture capital was all they strove for. The hope was to get acquired or do an IPO and let somebody else hold the bag - basically a Ponzi scheme. Investors realised this doesn’t really work anymore, so the free money ran out. The AI gold rush also means VCs have something else to throw their money at.
“Indeed, when ChatGPT is prompted, ChatGPT generates summaries of Plaintiffs’ copyrighted works—something only possible if ChatGPT was trained on Plaintiffs’ copyrighted works,” the complaint reads.
Or, hear me out for a minute, if critiques, summaries or discussions about the works were in the training data. Unless the authors want to claim nobody ever talks about their works on the internet…
That’s the thing with AI: Unless the model creator provides a complete breakdown of the training material, as Llama, RedPajama or Stable Diffusion do for example, it’s basically impossible to prove what exactly is or isn’t in the training dataset.
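The flip side is that open datasets are auditable - you can actually go looking. A rough sketch against the public RedPajama sample (the phrase and scan cutoff are illustrative, it’s nowhere near a real audit pipeline, and depending on your `datasets` version you may need `trust_remote_code=True`):

```python
# Only possible because the dataset is open: stream the public RedPajama
# sample and grep for a distinctive phrase from the work in question.
from datasets import load_dataset

ds = load_dataset(
    "togethercomputer/RedPajama-Data-1T-Sample",
    split="train",
    streaming=True,
)

needle = "a distinctive sentence from the work in question"  # placeholder
for i, row in enumerate(ds):
    if needle in row["text"]:
        print(f"found in document {i}")
        break
    if i >= 100_000:  # bail out; a real audit would index the full corpus
        print("not found in the first 100k documents")
        break
```

With a closed model like GPT-4, there’s simply nothing to point this at.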
I don’t get the XMPP thing. XMPP was an obscure protocol mostly used in non-federated applications (several MMOs use XMPP for in-game chat, for example, obviously not federated). When Google and Facebook adopted XMPP and federated, the user base exploded, sure. Then they defederated, and XMPP went straight back to where it was before. There was no EEE (Embrace, Extend, Extinguish) - it was EA: Embrace, Abandon. Google and Facebook didn’t extend or extinguish anything. If anything, Slack and Discord killed XMPP, not Google.
SDXL 0.9 seems absolutely amazing so far. It’s so much better at following instructions than any other SD foundation model it’s not even funny, and it can do tons of stuff out-of-the-box that would require at least an embedding with SD1.5. One thing I immediately noticed is that it handles color instructions properly most of the time. You can define tons of object colors, and it’ll usually apply each color only to the object it was specified for, without bleeding onto the others. I also tried things like “character in a dirty environment”. SD1.5 and its finetunes would often make the character dirty; SDXL follows the instruction properly. Incredible potential.
When it comes to the refiner, I found that the recommended(?) 0.25 strength works well for environments and such, but for characters, it should be dialed way down. I still use it, at around 0.05, and that seems to do the trick. Even at such a low strength it still does what it’s supposed to and has a profound effect on fine detail like hair, but it no longer overrides the base generation nearly as much.
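If anyone wants to try the same workflow, here’s a rough sketch with Hugging Face diffusers. Model IDs match the SDXL 0.9 release (gated behind a research license); the prompt and strength values are just illustrations of the above:

```python
# Rough sketch of the SDXL base + refiner workflow via diffusers.
import torch
from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline

base = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-0.9", torch_dtype=torch.float16
).to("cuda")
refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-0.9", torch_dtype=torch.float16
).to("cuda")

prompt = "portrait of a knight in a red cloak holding a blue banner, detailed hair"
image = base(prompt=prompt).images[0]

# strength controls how much the refiner repaints: ~0.25 for environments,
# way down (~0.05) for characters so the base composition survives.
refined = refiner(prompt=prompt, image=image, strength=0.05).images[0]
refined.save("knight.png")
```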
The idea is to monitor internal communications and do sentiment analysis to check if developers are toxic, too stressed or burned out. While the tech could of course be abused, the general idea sounds pretty good, as long as the AI runs on-prem for privacy reasons and the employer is transparent and honest about it. Making sure employees are healthy, happy and productive sounds like a worthwhile goal. I wouldn’t want a human therapist monitoring communications to look for negative signs, but the AI can screen stuff, focus exclusively on what it was told to, and forget everything on command.
AIs don’t judge, don’t remember and don’t hold anything against me, so I’d rather have an AI screening my stuff than a human - especially my superiors.
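The on-prem version could be as simple as this sketch - the model name is just one common open choice, the messages are made up, and nothing leaves the machine:

```python
# Minimal on-prem sketch: score sentiment locally with an open model and
# keep only aggregates, never the raw messages.
from transformers import pipeline

sentiment = pipeline(
    "sentiment-analysis",
    model="cardiffnlp/twitter-roberta-base-sentiment-latest",
)

messages = [
    "This deadline is impossible, I'm completely exhausted.",
    "Nice catch in the review, thanks!",
    "Can we please stop changing requirements mid-sprint?",
]

# Only the aggregate survives -- the raw text is discarded immediately.
labels = [result["label"] for result in sentiment(messages)]
negative_ratio = labels.count("negative") / len(labels)
print(f"negative share this week: {negative_ratio:.0%}")
```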
And yes, I trust an AI I run myself. I know they don’t phone home (because they literally can’t) and don’t remember anything unless I go through the effort to connect something like a Chroma or Weaviate vector database, which I then also host and manage myself. The beauty of open source. I would certainly never accept using GPT-4 or Bard or some other 3rd party cloud solution for something this sensitive.
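For the vector database part, here’s roughly what hooking up a local Chroma instance looks like (paths and texts are illustrative) - the “memory” is just files on my own disk:

```python
# Self-hosted memory: a local, on-disk Chroma instance.
import chromadb

client = chromadb.PersistentClient(path="./chroma_data")  # stays on my disk
notes = client.get_or_create_collection("notes")

notes.add(
    ids=["n1", "n2"],
    documents=[
        "Meeting moved to Thursday.",
        "API key rotation scheduled for next week.",
    ],
)

# Retrieval is explicit -- delete the directory and the "memory" is gone.
hits = notes.query(query_texts=["when is the meeting?"], n_results=1)
print(hits["documents"])
```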
Ubiquiti stuff is entirely on-premises, their (optional) cloud service is strictly for auth and remote access. Highly recommended, not just for the privacy conscious. Their ecosystem is also relatively affordable (compared to Aruba and Ruckus) and a joy to set up and maintain. No subscriptions or recurring fees.