IngrownMink4☭

IngrownMink4@lemmygrad.ml
Joined
9 posts • 57 comments

Cybercommunist ☭ (maybe) and FLOSS translator (including Lemmygrad!)

Pronouns: he/him

Lemmy: https://lemmy.ml/u/IngrownMink4

GNOME FTW 😎 enjoy your new hardware!!

Looks like a GNOME-based DE, yeah.

I liked it even more when I saw a 2h video analysis (yes, 2 hours) in Spanish by a content creator I follow.

I'll leave it here in case anyone is interested in watching it: https://www.youtube.com/watch?v=Bp036qa-rI8

It really shows when I write to other comrades in English hahaha

By the way, merry Christmas! :)

In this case, AdNauseam might be useful for you.

It’s sad to see this, but I’m not surprised. Ultra-nationalist Indians usually have this attitude on all social networks…

Fair enough 😅 I know the participants are cringe, but I shared it because I'd like to hear your opinion from a Marxist perspective. GeoHot is an accelerationist, and Connor, I think, tries to be “apolitical”, you know… lol

Anyway, I’ve put a summary of the transcript in the post description in case anyone wants to know what they say without having to watch the video.

What’s more, any country that tries to put the brakes on AI development will quickly find itself at a disadvantage relative to countries that don’t. For this reason alone, AI will be seen as a national security concern by all major nations.

In fact, we have seen that Americans are becoming increasingly fearful of AI, in contrast to the Chinese, who generally trust it. This could come down to who controls the AI. In the US, citizens imagine the most dystopian version of a large-scale rollout of these models because they know the government will use it to further repress the working class. In China, government regulation of AI generates trust because people trust the government. But as I mentioned in another comment, an open source AI for the whole population would be useless if the code were governed by a libertarian license like MIT/Apache 2.0, because of how easily the ruling class could appropriate that work, privatize it, and improve it to the point where the original code could no longer compete with it.

This would allow for an unprecedented level of economic planning efficiency.

Yes, in fact, isn’t that what the Chileans had in mind when they came up with Cybersyn? With the technological advances of our era, especially in AI, it would make sense to go back to this idea. In my opinion, China has the potential to implement it on a large scale.

Then the model is trained to interact with the physical world through reinforcement, and this leads it to create an internal representation of the world that’s similar to our own. This gives us a shared context that we can use to communicate with the model trained in this fashion. Such a model would have actual understanding of the physical world that’s similar to our own, and then we could teach it language based on this shared understanding.

Regarding what you mention, I have a question (maybe it sounds stupid): assuming these AIs learn and develop in a particular environment and become familiar with it in a way similar to humans, what would happen if they interact with something or someone outside that environment? For example, if an AI develops in an English-speaking country (its environment) and for some reason interacts with a Spanish-speaking person, the cultural peculiarities the AI has learned in that environment wouldn’t apply to that person. Do you think that could give a false sense of closeness, or would it be a technical limitation? idk if I’m making myself clear or if this is an absurd question 😅

I fully agree. And not only that, I’m also intrigued to know which license GeoHot would choose to launch such an open source AI. If he chose the more libertarian option, he would probably use the MIT license. If so, any powerful entity could take that AI as a base, lock down the code, and build a malicious AI on top of it. In the end, all efforts to “democratise” open source AI would be in vain.
