eth0p
In my other comments, I did say that I don’t trust this proposal either. I even edited the comment you’re replying to to explain how the proposal could be used in a way to hurt adblockers.
My issue is strictly with how the original post is framed. It’s using a sensationalized title, doesn’t attempt to describe the proposal, and doesn’t explain how the conclusion of “Google […] [wants] to introduce DRM for web pages” follows the premise (the linked proposal).
I wouldn’t be here commenting if the post had used a better title such as “Google proposing web standard for web browser verification: a slippery slope that may hurt adblockers and the open web,” summarized the proposal, and explained the potential consequences of it being implemented.
I don’t disagree with you. If this gets implemented, the end result is going to be a walled garden web that only accepts “trusted” browsers. That’s the concern here for ad blocking: every website demanding a popular browser that just so happens to not support extensions.
My issue is with how the OP framed the post. The title is misleading and suggests that this is a direct attempt to DRM the web, when it’s not. I wouldn’t have said anything if the post was less sensationalized, laying out the details of the proposal and its long-term consequences in an objective and informative way.
Because I Got High by Afroman?
A couple years back, I had some fun proof-of-concepting the terrible UX of preventing password managers or pasting passwords.
It can get so much worse than just an alert() when right-clicking.
A small note: It doesn’t work with mobile virtual keyboards, since they don’t send keystrokes. Maybe that’s a bug, or maybe it’s a security feature ;)
But yeah, best tried with a laptop or desktop computer.
How it detects password managers:

- Unexpected CSS or DOM changes to the input element, such as an icon overlay for LastPass.
- Paste event listening.
- Right-clicking.
- Detecting if more than one character is inserted or deleted at a time.
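That last heuristic is simple enough to sketch. This is a hypothetical, self-contained version (the function name and strings are mine, not from the original demo): compare the field's contents before and after each input event. A human keystroke changes the length by at most one character, while an autofill writes the whole password at once.

```typescript
// Hedged sketch of the "more than one character at a time" heuristic.
// In a real page you'd call this from an "input" event handler, comparing
// the previous and current values of the password field.
function looksLikeAutofill(previous: string, current: string): boolean {
  // A single keystroke inserts or deletes at most one character;
  // a larger delta suggests a paste or a password manager fill.
  return Math.abs(current.length - previous.length) > 1;
}

console.log(looksLikeAutofill("hunte", "hunter"));   // false: one keystroke
console.log(looksLikeAutofill("", "correct horse")); // true: whole value at once
```

Of course, this also flags legitimate pastes and IME input, which is exactly why it makes for terrible UX.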
In hindsight, it could be even worse by using Object.defineProperty to check if the value property is manipulated or if setAttribute is called with the value attribute.
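A rough sketch of that trick, using a plain object in place of a real input element so it runs outside a browser (on a real page you would wrap the accessor from HTMLInputElement.prototype instead — the names and messages here are illustrative assumptions):

```typescript
// Record programmatic writes to a "value" property via an accessor.
const detections: string[] = [];
let backingValue = "";

const input = {} as { value: string };

Object.defineProperty(input, "value", {
  get: () => backingValue,
  set: (newValue: string) => {
    // In a browser, native keystrokes update the element's internal value
    // slot directly, so a write landing in this setter came from a script —
    // e.g. a password manager assigning input.value.
    detections.push(`programmatic write: ${newValue}`);
    backingValue = newValue;
  },
});

input.value = "hunter2"; // simulate a password manager filling the field
console.log(detections); // ["programmatic write: hunter2"]
```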
You say that like it hasn’t been happening already for two decades.
https://www.cnet.com/news/privacy/fbi-taps-cell-phone-mic-as-eavesdropping-tool/
I can’t read French, so I only have others’ translations and interpretations to rely on, but from what I understand, the differences here are that:

- French lawmakers are being direct with their legislation, rather than relying on precedent or judges’ interpretations of anti-terrorism or national security bills; and
- Privileged conversations (e.g. between client and attorney) can still be admissible when recorded surreptitiously this way.

Apparently it would still need to be pre-approved by a judge. That doesn’t inspire much confidence in it not being hand-wave allowed, though.
But the real question is, can we change them?
Imagine this:
- Good thing → Zelda chest opening sound.
- Bad thing → The Office “Oh, god, please no.”
- User interaction → Honking goose.
- Log in → Futurama “Welcome to the world of tomorrow!”
- Log out → AOL “You’ve got mail”
Circular dependencies can be removed in almost every case by splitting out a large module into smaller ones and adding an interface or two.
In your bot example, you have a circular dependency where (for example) the bot needs to read messages, then run a command from a module, which then needs to send messages back.
  v-----------\
bot           command_foo
  \-----------^
This can be solved by making a command conform to an interface, and shifting the responsibility of registering commands to the code that creates the bot instance.
main <----------
 ^              \
 |               \
bot ---> command_foo
The bot module would expose the Bot class and a Command interface. The command_foo module would import Bot and export a class implementing Command. The main function would import Bot and CommandFoo, and create an instance of the bot with CommandFoo registered:
// bot module
export interface Command {
  onRegister(bot: Bot, command: string): void;
  onCommand(user: User, message: string): void;
}

// command_foo module
import {Bot, Command} from "bot";

export class CommandFoo implements Command {
  private bot: Bot;

  onRegister(bot: Bot, command: string) {
    this.bot = bot;
  }

  onCommand(user: User, message: string) {
    this.bot.replyTo(user, "Bar.");
  }
}

// main
import {Bot} from "bot";
import {CommandFoo} from "command_foo";

let bot = new Bot();
bot.registerCommand("/foo", new CommandFoo());
bot.start();
It’s a few more lines of code, but it removes the circular dependency, reduces coupling, and adds flexibility. It’s easier to write unit tests for, and users are free to extend it with whatever commands they want, without needing to modify the bot module to add them.
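To show the testability claim concretely, here is a self-contained sketch that inlines minimal versions of the example's Bot, User, and Command types (the recording `sent` array and the stub wiring are my assumptions, not part of the original example). Because CommandFoo only talks to the Command interface's collaborators, it can be driven with a stub bot and inspected directly:

```typescript
// Minimal stand-ins for the example's types.
interface User { name: string }

class Bot {
  // Record outgoing replies instead of talking to a real chat service.
  sent: Array<[User, string]> = [];
  replyTo(user: User, message: string) { this.sent.push([user, message]); }
}

interface Command {
  onRegister(bot: Bot, command: string): void;
  onCommand(user: User, message: string): void;
}

class CommandFoo implements Command {
  private bot!: Bot;
  onRegister(bot: Bot, _command: string) { this.bot = bot; }
  onCommand(user: User, _message: string) { this.bot.replyTo(user, "Bar."); }
}

// The "unit test": drive the command with a stub bot and check its output.
const bot = new Bot();
const cmd = new CommandFoo();
cmd.onRegister(bot, "/foo");
cmd.onCommand({ name: "alice" }, "/foo");
console.log(bot.sent); // [ [ { name: "alice" }, "Bar." ] ]
```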
Frankly, I don’t trust that the end result won’t hurt users. This kind of thing, allowing browser environments to be sent to websites, is ripe for abuse and is a slippery slope to a walled garden of “approved” browsers and devices.
That being said, the post title is misleading, and that was my whole reason to comment. It frames the proposal as a direct and intentional attack on users’ ability to locally modify the web pages served to them. I wouldn’t have said anything if the post body had made a reasonable attempt to objectively describe the proposal and explain why it would likely hurt users who install adblockers.
I expect to get downvoted into oblivion for this, but there’s nothing wrong with the concept of C2PA.
It’s basically just Git commit signing, but for images. An organization (user) signs image data (a commit) with their private key, and other users can check that the image provenance (chain of signed commits) exists and that the signing key is known to be owned by the organization (the signer’s public key is trusted). It does signing of images created from multiple assets (merge commits), too.
All of this is opt-in, and you need a private key. No private key, no signing. You can also strip the provenance by just copying the raw pixels and saving it as a new image (copying the worktree and deleting .git).
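The signing model C2PA borrows can be sketched in a few lines. This is not the real C2PA manifest format — just the underlying asymmetric-signature idea, using Node's crypto module and Ed25519 keys as stand-ins:

```typescript
import { generateKeyPairSync, sign, verify } from "node:crypto";

// The organization's keypair; the public key is what others trust.
const { publicKey, privateKey } = generateKeyPairSync("ed25519");

const imageBytes = Buffer.from("raw pixel data"); // stand-in for a real image

// "Commit signing": only the holder of the private key can produce this.
const signature = sign(null, imageBytes, privateKey);

// Anyone can verify against the org's published public key.
console.log(verify(null, imageBytes, publicKey, signature)); // true

// "Copying the worktree and deleting .git": same pixels, no provenance.
// A copy of imageBytes saved as a new image carries no signature, so there
// is nothing to verify — and re-signing requires a private key the copier
// does not have.
```

Tampering with the pixels (or stripping the signature) simply leaves nothing that verifies, which is the whole point: the scheme proves origin, it doesn't prevent copying.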
A scummy manufacturer could automatically generate keys on a per-user basis and sign the images to “track” the creator, but C2PA doesn’t make that any easier than just stuffing an identifier into the EXIF data or automatically uploading photos to some government-owned server.