
"If you're seeing this message, that means JavaScript has been disabled on your browser.

Please enable JavaScript to make this website work."

I'm so sick of websites refusing to even display text and images if I don't agree to run their proprietary Javascript on my computer. Isn't it time that browsers started treating requests to run Javascript like requests to use the mic or camera, and asked the user before allowing them? Ideally with crowdsourced info about what the scripts are, and what they do? In other words, make something like #NoScript a standard part of browsers.

@strypey Be careful what you ask for: that would kill the peer web before it started. JavaScript on the client isn’t the enemy. Business logic on the server is the enemy.

that's a fine distinction. As a developer, you're in a better position than I am to know. But you'll need to convince me. Because it seems to me that outsourcing processing work to the user's PC - via JS black boxes - is exactly how #SurveillanceCapitalism achieves massive scale, while claiming that #ThereIsNoAlternative to centralized server infrastructure. A myth they've propagated so effectively that even many developers have started believing it:

JavaScript isn't a black box, though. You can inspect all the code that's running in your browser.

Some JS is obfuscated, but it can be easily de-obfuscated. All browser-side JavaScript is effectively open source, even if it's not licensed as such.

If your concern is about privacy, it's not the JS running in your browser that should concern you. It's the data sent from the JavaScript to the server.

It would be reasonably simple to disable AJAX, thus preventing data from being sent to or received from the server, while allowing all other JavaScript, so interactivity would still work.
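As a sketch of how a site could approximate that today: a Content-Security-Policy header whose connect-src directive is 'none' blocks fetch/XHR/WebSocket connections initiated by scripts, while script-src still lets the site's own JavaScript run. (It doesn't cover ordinary form submissions, which the form-action directive governs.)

```
Content-Security-Policy: connect-src 'none'; script-src 'self'
```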


fair points. But privacy is only a subset of a much larger concern, which is about *control*. Putting aside the argument we could have over the "black box" part of my post, the fact remains that:

> outsourcing processing work to the user's PC - via JS - is exactly how #SurveillanceCapitalism achieves massive scale, while claiming that #ThereIsNoAlternative to centralized server infrastructure.

There are many possible strategies for redecentralizing, and resolving the *many* problems with JS, some of which are described here:

I agree with @alcinnz that moving interactive functions back into native apps, leaving the web as a platform for static pages that don't require (or use) JS, is a strategy worth exploring.

Maybe the FSF should worry more about its logo appearing next to Google’s as they sponsor the same events than some ridiculous and ill-informed stance against a programming language that spreads FUD about potential alternatives. Remember that an AGPLv3 licensed app specifically built for drones to send hellfire missiles to little children would get the FSF seal of approval. Free Software is just a component of ethical tech but doesn’t care about ethics of use cases.

I share your concerns about open source events being sponsored by Google, as do the FSF, but they can't control this. As for approving of child-killing drone software, that's FUD worthy of Microsoft. The FSF have often spoken out about the use of freely-licensed code to do things far less anti-social than that:

Perhaps you could respond to the concerns laid out in 'The Javascript Trap' with some substance, rather than resorting to whattaboutism?
@danjones @alcinnz

as for the claim that the FSF's criticisms of Javascript are a ...
> ridiculous and ill-informed stance against a programming language

I note that they're far from alone in seeing JS as a problem. Plenty of experienced engineers have serious problems with it too. A quick selection off the top of my head:
@danjones @alcinnz

I'm aware of the holy wars that constantly rage for and against programming languages. But AFAIK JS is the only one that results in code being downloaded and run on the user's computer on-the-fly. As onPon's article points out, that makes proprietary JS code effectively impossible to replace at the user end with free code. These are not trivial issues, and implying that they are suggests a failure to understand the scope of the problem.
@danjones @alcinnz

Right and what happens exactly when you have automatic updates on and a native app gets updated? Now what happens when you’ve allowed say Apple to use bitcode? Instead of vilifying JS when some of us are trying to build systems using it, let’s understand that the real issue is business logic on the server, proprietary/closed source code, and lack of reproducible builds. Spreading FUD about in-browser JS could jeopardise what I’m working on with

> the real issue is business logic on the server, proprietary/closed source code, and lack of reproducible builds.

Sure, these are all problems, and I get that JS isn't the only vector for them. But the way it's deployed makes it particularly vulnerable. You've also left out the major architectural weaknesses of JS (eg the security audit nightmare created by dependence on hundreds of third-party modules). As for Apple, the FSF criticize their practices harshly elsewhere, as I'm sure you know.

> could jeopardise what I’m working on

This is neither here nor there, but it does suggest you're getting too emotionally close to the issues to be totally objective in your analysis. For the discussion to continue productively, it's probably best to look at it from 50,000 ft, and purely from a user POV, pretending for the sake of argument that you have no skin in the game as a technology creator.

You bet I’m emotionally invested in it – I’m not sitting in an ivory tower perpetuating some bs notion of neutrality in the matter while enjoying my tenure. I’m building what I’m building because I care about the issues not the other way around :) (And I’d further argue that objectivity is impossible for any being with self interests – even base ones like a need for food or shelter. The best we can be is transparent about our biases and subjectivity.)

make no mistake, I believe you really do care about protecting users from surveillance capitalism, as I do, which is why I regularly signal boost your stuff. All I'm saying is that folks who really like their hammers have a tendency to start seeing every problem as a nail. It's important not to let a sunk cost fallacy prevent you from considering other options that might also work, and corner you into interpreting any suggestion along those lines as a demand to ban hammers ;)

> objectivity is impossible

True but only because we are limited by our physical nature.

> for any being with self interests

Yet some people act independently of their self-interests.

I'm one, so I know they exist.

I'm not particularly good or wise, I'm just curious.

I appreciate your work, but I agree that you're scared of something that isn't going to affect it.
Let's assume that one day #JavaScript and #WASM execution become opt-in in #Web #browsers: your application is not a web browser, so nothing would change.

This is not #FUD, but a real attack by the #Russian gov (mostly) on their own citizens:

#JS enabled this #surveillance, and if it was #Google doing the same we would have never known it.

Right, and as an experienced engineer (if 35 years of experience in programming counts for anything) who cares deeply about this problem, I’m telling you that a client-side-only JavaScript-based approach is essential to building a bridge from surveillance capitalism to a peerocracy. If you haven’t, please read what I wrote here, where I explain why JS in browser – if not linked to business logic on the server – is not the enemy:

I finished reading this article today, after starting it yesterday, and reposting it so anyone following my account on the birdsite gets it, as well as those on the fediverse. It's a good overview. I didn't say JS is the enemy, I argued that it ought to be opt-in, so that:
a) users can protect themselves from the harms it is already known to do
b) web designers are incentivized to use it only when it's really needed, not use it to add trackers etc to what ought to be static pages

I thought we agreed the other day that these concerns are orthogonal. I'm not against your webapps, but auto-executing JS is a valid concern.

If you're afraid of lack of JS hurting you, I encourage you to write good <noscript> messages promoting the native apps.
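A minimal sketch of such a message (the app name and download path are placeholders, not real ones):

```
<noscript>
  <p>This page works without JavaScript, but the interactive client does not.
     You can use our native app instead:
     <a href="/downloads">get it here</a>.</p>
</noscript>
```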

P.S. As you should well know from Better there's plenty of Fear, Uncertainty, and Doubt to be had around what JS is doing.

P.P.S. I have read your blogpost.

I disagree. Controlled code execution in either place is fine.
If people can know and can control what is executed, that's fine. IMO.

JavaScript and browser infrastructure are heavy for lightweight clients. And ethics includes the accessibility angle too... Hm...

I don't begrudge you for using the expedient tools of today, but I must request that you not stand in the way of developments that NEED to happen to ensure all the software we run is libre. So we can do better than whack-a-mole efforts like your Better (which I really appreciate, btw, thank you!).

Apple and Microsoft already learnt their lesson about auto-executing CD software, but that vulnerability only moved to The Web.

I agree strongly with you that logic must be clientside.

I disagree that The Web is a good tool for that. It has its place, but we really need to push for libre native apps following open standards.

And browsers should assist with that by pointing people towards the apps they need to use those standards securely. I've implemented that for Odysseus.

Agree. But we need untrusted relay nodes to guarantee findability and availability at the levels people have come to expect from centralised systems if we want to build a bridge from here to there. We can shed those training wheels once there are enough nodes. But without them we won't get adoption.

Yes, we do need those untrusted relay nodes, but I do not see how they relate to the argument we are having. Why can't we be distributing native apps for each platform?

I agree. But I don’t have the resources to do so. So I’m starting with an offline browser app first and then hopefully we’ll inspire others to build native apps. But there is no one size fits all; a plurality of approaches and initiatives can only make us stronger.

And as I said at the start, I don't begrudge you for doing so.

It's also important to distinguish between web apps and web pages.

A website for a restaurant is not an app. It doesn't need Javascript. I'm disinclined to enable JS for it just to see their phone number.

A website that is acting as a Matrix client? That's acting as an *app*, and I'm more inclined to enable JS. Of course it can run code, just as I would allow a native app to do.

(My *ideal* would be sandboxed native apps shipped with a standard package manager.)

It's not the same as with an application.

When you download an application, you can compare its cryptographic hash with the one your friends got. Usually you don't need to authenticate to download, so the chances you get a malicious version specifically crafted for you are lower.
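That comparison is just a hash check; a minimal sketch in shell, where the file name and its contents are stand-ins rather than a real app:

```shell
# Stand-in for a downloaded application binary.
echo "hello" > app.AppImage

# Hash a friend (or the developer) published out-of-band.
expected="5891b5b522d5df086d0ff0b110fbd9d21bb4fc7163af34d08286a2e846f6be03"

# Hash of what we actually downloaded.
actual=$(sha256sum app.AppImage | cut -d' ' -f1)

if [ "$actual" = "$expected" ]; then
    echo "hashes match: you got the same bytes your friends did"
else
    echo "MISMATCH: do not run this download"
fi
```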

With a #WebApplication any #CDN that the developers trust could customize the code you execute exactly for you.

And sandboxing is not enough.

1. Download an app from the App Store.

2. Check its hash how, exactly? The developer doesn’t even know it.

With a client-side-only JS app:

1. Include a single script file using subresource integrity.

2. Publish the hash of the HTML file for independent verification.

3. Use a browser extension to verify the hash of the HTML.

You’ve verified the app.
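For the curious, the integrity value in step 1 and the hash in step 2 can both be produced with standard tools; a sketch, where app.js and index.html are hypothetical file names:

```shell
# Stand-in for the app's single script bundle.
printf 'console.log("hi");\n' > app.js

# Step 1: compute a subresource-integrity value for the bundle.
sri="sha384-$(openssl dgst -sha384 -binary app.js | openssl base64 -A)"

# Minimal HTML wrapper that pins the bundle to that hash.
printf '<script src="app.js" integrity="%s" crossorigin="anonymous"></script>\n' "$sri" > index.html

# Step 2: publish this hash so anyone can verify the HTML independently.
sha256sum index.html
```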

Not yet enough to be safe, @aral, but a good starting point:

If the application is completely client side (never does any network connection) it might be enough.

Otherwise further restrictions should be in place and they require changes to the browsers:

a new Icelandic mail provider, CTemplar, implemented checksum verification this way and published the hash on GitHub. Their blog post:

Thanks for the heads up, will read up :)

this is a fantastic piece of writing. Get out of my head! :)
@alcinnz @buoyantair @VeintePesos

> Business logic on the server is the enemy.


Both are important, valid approaches to different domains. Browser tech still isn't very peerful: Dat requires DHT servers to coordinate even browser-based peers, SSB requires "pubs" that can't run in the browser, federated tech like ActivityPub requires servers, and the append-only log datastructures used by much P2P tech are wholly inappropriate for many common applications where ephemerality matters.

The "Peer Web" has nothing to do with it! It makes for a twisted argument for logic in content, though. Take Git: the fact that people publish all of their code on GitHub does not remove the distributed nature of Git. What will happen in any model of a "Peer Web" is that somebody will offer services to set up content using "peer" technology and, boom, "the Peer Web" will start centralizing again. Scripting in content would still be a serious problem.

no way. put nothing on the server but a decryption key--no logic, no data, nothing--and then store whatever data you want on users' machines encrypted so they can't read it or know why it's there or what it does. being able to read the source code on the web page will tell them nothing about what you're storing.

honestly, you should rethink your stance on this.