
chinpokomon

chinpokomon@beehaw.org
Joined
0 posts • 30 comments

In their interactions and personal knowledge, perhaps he was. If you personally don’t know Danny or anyone else involved, your only exposure is what you’ve heard presented and made public. If you personally knew Danny and hadn’t witnessed any of these crimes yourself, you now have a conflicted view of someone who is both your friend and now guilty of 2 counts of sexual assault. While that conviction almost certainly changes your relationship going forward, it doesn’t change how you thought of that individual beforehand.

Ashton and Mila were asked to write letters of character that described the Danny they knew. It doesn’t change the outcome of the trial, but in matters where the judge has discretion over sentencing, a judge will often weigh letters like these to determine what is appropriate. Is there a chance that the defendant will repeat this offense? What punishment, if any, will be restorative to the victims? How does this punishment affect everyone, including families established years afterwards? Is the defendant the same person today as they were when they committed these crimes?

These aren’t matters easily decided, and therefore it isn’t surprising to see letters of character submitted either as part of the trial or during sentencing. If there is a pattern of behavior, then sentencing might be the maximum allowed, but if there’s no clear discernible pattern, then sentencing might be light.

I don’t know all the details that were considered, but based on my knowledge from reports, I think 15 years concurrent would have been appropriate. However, I don’t have all the evidence or material to make an informed decision. I don’t look upon these letters as reflecting poorly on Ashton or Mila, as they were just doing what was asked of them to help give the judge the context necessary to carry out an appropriate sentence. They aren’t guilty of doing anything wrong, any more than the lawyers who defended a now convicted and sentenced rapist.


This is in NV, but what I don’t know is what jurisdiction applies. Involved might be federal, state, or county agents. The Ranger truck might suggest that it was federal, but the highway would have been state managed.

Edit: Read elsewhere that it was a Tribal Ranger truck, so yet another jurisdiction. Excessive response and the conduct is under review.


How does Russia want our help? Strange that they’re asking the West to help with the Ukrainian drone attacks, but they have my support.


I try to use both equally, because I’m always on the hook for picking the “doomed” standard in any 50/50 contest.

I can relate to that. It usually isn’t a coin flip for me though. I’ll align with one technology over another because I truly can see an advantage. That technology might be the underdog from the beginning. Consider that we’re evaluating Firefish vs. Lemmy vs. Kbin, when all of them combined are the underdog compared to certain more well-established social forums. I engage with all three (and others still), because I don’t know the future.


That’s the same theme as a reply I made yesterday. I read the article and might have even boosted it myself, because as a fediverse citizen, I’m concerned about any government agency seizing an instance like this. The “well known racist” claim is demonstrably false, because I still don’t know who they are talking about, nor would I know the person behind a username.


I think a human might consider the meaning of what is being said, whereas an LLM is only going to consider which token is the best one to use next. Humans might not be infallible, but they are presently better at detecting obvious BS that would slip undetected past an AI.

Maybe this is an opportunity we haven’t considered. This is the chance to create a Turing CAPTCHA Test. We can’t use Glorbo to do so, because it has been written, but perhaps it makes sense that there is a nonsensical code phrase people can use to identify AIs, both with markers intentionally added to LLM training models, buried in articles written by human authors, and a challenge/response which is never written down and only passed verbally through real human-human interactions.


I grabbed some the last time I saw them in the store. Still waiting for them to come back.


Grandview seemed to do the best in clearly identifying the character 0. Is it an O, 0, I, l, or 1? Even without an example of O clearly visible in the sample text, the shape of 0 was very clear and seems like it should stand apart. Not the only reason to select a font, but it might be important to some.


Arguably it is a strength. Unless a user has used the same username and password for different instances, their credentials on one instance are shielded from exploit over the whole network. The potential risk can only really be determined by how security was breached. If it was social engineering, then there isn’t any other direct concern. If it was a vulnerability in software, then the same attack could be played out on other instances, but that’s not any different than other systems, like a Linux kernel exploit.


If a human can access your public repo and read comments posted on public forums, are they stealing your code? LLMs are just aggregators of a great many resources, and they aren’t doing anything more than a biological human can already do. The LLM can do so more efficiently than a biological human, while perhaps being more prone to error, as it doesn’t completely understand why something is written the way it is. As such, any current AI model is prone to signpost errors, but in my experience it has been very good at organizing the broader solution.

I can give you two examples. I started by trying to find out how a .NET API call was made. I was trying to implement retry logic for a call, and I got the answer I asked for. I then realized that the AI could do more for me. I asked it to write the routine for me, and it suggested using a library which is well suited for that purpose. I asked it to rewrite the code without using an external library, and it spit it out. I could have written this completely from scratch; in fact I had already come up with something similar, but I was missing the API call I was initially looking for. That said, the result already included some parts I would otherwise have had to go back and add, so it saved me a lot of time doing something I already knew how to do.
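For context, a dependency-free retry routine along the lines described might look like this sketch (the original exchange was about .NET and isn't quoted, so this Python version, its name, and its parameters are purely my own illustration):

```python
import time

def retry(fn, attempts=3, delay=0.1, backoff=2):
    """Call fn, retrying on exception with exponential backoff.

    Re-raises the last exception if every attempt fails.
    """
    last_exc = None
    for i in range(attempts):
        try:
            return fn()
        except Exception as exc:
            last_exc = exc
            if i < attempts - 1:  # no sleep after the final attempt
                time.sleep(delay)
                delay *= backoff
    raise last_exc
```

A library built for this purpose would add niceties like jitter and per-exception filtering, which is exactly the kind of detail a hand-rolled version tends to omit.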

In a second case, I asked it to solve a problem which at its heart was a binary search. To validate that the answer was correct, it would need to go one extra step, but to answer the question it wasn’t necessary to actually perform that last validation step. I was looking for the answer 10, but the AI gave me answers in the range of 9–11. It understands the basic concepts, but it still needs a biological human to validate what it generates.
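Those off-by-one answers fit the shape of binary search: the loop narrows a range, and only a final equality check pins down the exact answer, which is the step that's easy to skip. A textbook sketch (the problem itself isn't given, so this generic lookup is only illustrative):

```python
def binary_search(items, target):
    """Return the index of target in sorted items, or -1 if absent."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:   # the validation step: without this
            return mid             # check, mid can land one off
        elif items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1
```

Dropping the equality check and returning `lo` or `mid` when the loop ends is precisely how you get answers that hover around the true one, as with the 9–11 range above.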
