I write about technology at theluddite.org

  • 2 Posts
  • 160 Comments
Joined 3 years ago
Cake day: June 7th, 2023


  • I don’t like this way of thinking about technology, which philosophers of tech call the “instrumental” theory. Instead, I think that technology and society make each other together. Obviously, technology choices like mass transit vs cars shape our lives in ways that simpler tools, like a hammer or whatever, don’t help us explain. Similarly, society shapes the way that we make technology.

    In making technology, engineers and designers are constrained by the rules of the physical world, but that is an underconstraint. There are lots of ways to solve the same problem, each of which is equally valid, but those decisions still have to get made. How those decisions get made is the process through which we embed social values into the technology, and those values accumulate over time. To return to the example of mass transit vs cars, these obviously have different values embedded within them, which then go on to shape the world that we make around them. We wouldn’t even be fighting about self-driving cars had we made different technological choices a while back.

    That said, on the other side, just because technology is more than just a tool, and does have values embedded within it, doesn’t mean that its use is deterministic. People find subversive ways to use technologies that go against the values built into them.

    If this topic interests you, Andrew Feenberg’s book Transforming Technology argues this at great length. His work is generally great and mostly on this topic or related ones.



  • I am once again begging journalists to be more critical of tech companies.

    But as this happens, it’s crucial to keep the denominator in mind. Since 2020, Waymo has reported roughly 60 crashes serious enough to trigger an airbag or cause an injury. But those crashes occurred over more than 50 million miles of driverless operations. If you randomly selected 50 million miles of human driving—that’s roughly 70 lifetimes behind the wheel—you would likely see far more serious crashes than Waymo has experienced to date.

    […] Waymo knows exactly how many times its vehicles have crashed. What’s tricky is figuring out the appropriate human baseline, since human drivers don’t necessarily report every crash. Waymo has tried to address this by estimating human crash rates in its two biggest markets—Phoenix and San Francisco. Waymo’s analysis focused on the 44 million miles Waymo had driven in these cities through December, ignoring its smaller operations in Los Angeles and Austin.

    This is the wrong comparison. These are taxis, which means they’re driving taxi miles. They should be compared to taxis, not normal people who drive almost exclusively during their commutes (which is probably the most dangerous time to drive since it’s precisely when they’re all driving).

    We also need to know how often Waymo intervenes in its supposedly autonomous operations. The latest numbers we have on this, leaked a while back, showed that Cruise (a different company) cars were actually less autonomous than regular taxis, requiring more than one employee per car.

    edit: The leaked data on human interventions was from Cruise, not Waymo. I’m open to self-driving cars being safer than humans, but I don’t believe a fucking word from tech companies until there’s been an independent audit with full access to their facilities and data. So long as we rely on Waymo’s own publishing without knowing how the sausage is made, they can spin their data however they want.

    edit2: Updated to say that journalists should be more critical in general, not just about tech companies.
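    To make the denominator point concrete, here’s a back-of-envelope sketch in Python. The 60 crashes over 50 million miles come from the quoted article; the two baseline rates are made-up placeholders, since the right baseline is exactly what an independent audit would need to establish:

    ```python
    # Crash rates only mean something relative to exposure (miles driven),
    # and the conclusion depends entirely on which baseline you pick.

    def crashes_per_million_miles(crashes: int, miles: int) -> float:
        """Normalize a crash count by miles of exposure."""
        return crashes / (miles / 1_000_000)

    # Figures from the quoted article.
    waymo_rate = crashes_per_million_miles(60, 50_000_000)

    # HYPOTHETICAL baselines, purely for illustration, not real data.
    all_drivers_rate = 2.5   # made-up: includes rush-hour commuters
    taxi_drivers_rate = 1.1  # made-up: professional drivers, mixed hours

    print(f"Waymo: {waymo_rate:.2f} serious crashes per million miles")
    print(f"vs all drivers (hypothetical): {all_drivers_rate:.2f}")
    print(f"vs taxi drivers (hypothetical): {taxi_drivers_rate:.2f}")
    ```

    Against the made-up “all drivers” number Waymo looks clearly safer; against the made-up taxi number it’s roughly a wash, which is the whole point of insisting on the right comparison group.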


  • I live in Vermont. These rosy articles about Front Porch Forum come out every so often, and, as someone who writes about the intersection of tech and capitalism, they frustrate me.

    First things first, it’s a moderated mailing list with some ads. I don’t know if it even makes sense to call it a social network, honestly. It’s a great service because moderated mailing lists are great. Here’s the problem:

    To maintain this level of moderation, the founder does not want to expand Front Porch Forum beyond Vermont’s borders. He highlighted Nextdoor, another locally-focused social media platform that has expanded internationally, which has often been accused of inflaming tensions within communities due to its more relaxed moderation policy. However, Sabathier believes that local social media similar to Front Porch Forum could work elsewhere in the US, including in less progressive states – Vermont, the home of socialist Senator Bernie Sanders, was the state that cast the fewest votes for Trump in the November 2024 election. “It’s not so much a political platform as a tool for communities to organize themselves and be more cohesive,” said the researcher. “And that would be beneficial everywhere.”

    Capitalism makes this world impossible. Front Porch Forum is a private business owned by a guy (technically, it’s a public benefit corporation, but those are toothless designations). Like so many beloved services, it’ll be great until it’s not. Eventually, cofounders, as lovely and well-meaning as they might be, leave, move, die, whatever, and someone shitty will end up in control. Without a corporate restructuring into, say, a user cooperative, it is just as doomed as every other internet thing that we’ve all loved. These puff pieces always act like Vermont is a magical place and, frankly, it is, but not like this. We live under capitalism too. Sometimes, due to being a rural, freezing, mountainous backwater, we get short reprieves from the worst of it, but the problem with social media is systemic.

    AMA I guess.


  • I completely and totally agree with the article that the attention economy in its current manifestation is in crisis, but I’m much less sanguine about the outcomes. The problem with the theory presented here, to me, is that it’s missing a theory of power. The attention economy isn’t an accident, but the result of the inherently political nature of society. Humans, being social animals, gain power by convincing other people of things. From David Graeber (who I’m always quoting lol):

    Politics, after all, is the art of persuasion; the political is that dimension of social life in which things really do become true if enough people believe them. The problem is that in order to play the game effectively, one can never acknowledge this: it may be true that, if I could convince everyone in the world that I was the King of France, I would in fact become the King of France; but it would never work if I were to admit that this was the only basis of my claim.

    In other words, just because algorithmic social media becomes uninteresting doesn’t mean the death of the attention economy as such, because the attention economy is, in some form, innate to humanity. Today it’s algorithmic feeds, but 500 years ago it was royal ownership of printing presses.

    I think we already see the beginnings of the next round. As an example, the YouTuber Veritasium has been doing educational videos about science for over a decade, and he’s by and large good and reliable. Recently, he did a video about self-driving cars, sponsored by Waymo, which was full of (what I’ll charitably call) problematic claims that were clearly written by Waymo, as fellow YouTuber Tom Nicholas pointed out. Veritasium is a human that makes good videos. People follow him directly, bypassing algorithmic shenanigans, but Waymo was able to leverage its resources to get into that trusted, no-algorithm space. We live in a society that commodifies everything, and as human-made content becomes rarer, more people like Veritasium will be presented with more and increasingly lucrative opportunities to sell bits and pieces of their authenticity for manufactured content (be it by AI or a marketing team), while new people that could be like Veritasium will be drowned out by the heaps of bullshit clogging up the web.

    This has an analogy in our physical world. As more and more of our physical world looks the same, as a result of the homogenizing forces of capital (office parks, suburbia, generic blocky buildings, etc.), the fewer and fewer remaining parts that are special, like say Venice, become too valuable for their own survival. They become “touristy,” which is itself a sort of ironically homogenized, commodified authenticity.

    edit: oops I got Tom’s name wrong lol fixed


  • I cannot handle the fucking irony of that article being in Nature, one of the organizations most responsible for fucking it up in the first place. Nature is a peer-reviewed journal that charges people thousands upon thousands of dollars to publish (that’s right, charges, not pays), asks peer reviewers to volunteer their time, and then charges the very institutions that produced the knowledge exorbitant rents to access it. It’s all upside. Because they’re the most prestigious journal (or maybe one of two or three), they can charge rent on that prestige, then leverage it to buy and start other subsidiary journals. Now they have this beast of an academic publishing empire that is a complete fucking mess.


  • My two cents, but the problem here isn’t that the images are too woke. It’s that the images are a perfect metaphor for corporate DEI initiatives in general. Corporations like Google are literally unjust power structures, and when they do DEI, they update the aesthetics of the corporation such that they can get credit for being inclusive but without addressing the problem itself. Why would they when, in a very real way, they themselves are the problem?

    These models are trained on past data and will therefore replicate its injustices. This is a core structural problem. Google is trying to profit off generative AI while not getting blamed for these baked-in problems by updating the aesthetics. The results are predictably fucking stupid.


  • I have worked at two different start ups where the boss explicitly didn’t want to hire anyone with kids and had to be informed that there are laws about that, so yes, definitely anti-parent. One of them also kept saying that they only wanted employees like our autistic coworker when we asked him why he had spent weeks rejecting every interviewee that we had liked. Don’t even get me started on people that the CEO wouldn’t have a beer with, and how often they just so happen to be women or foreigners! Just gross shit all around.

    It’s very clear when you work closely with founders that they see their businesses as a moral good in the world, and as a result, they have a lot of entitlement about their relationship with labor. They view laws about it as inconveniences on their moral imperative to grow the startup.


  • This has been ramping up for years. The first time that I was asked to do “homework” for an interview was probably in 2014 or so. Since then, it’s gone from “make a quick prototype” to assignments that clearly take several full work days. The last time I job hunted, I’d politely accept the assignment and ask them if $120/hr is an acceptable rate, and if so, I can send over the contract and we can get started ASAP! If not, I refer them to my thousands upon thousands of lines of open source code.

    My experience with these interactions is not that they’re looking for the most qualified applicants, but that they’re filtering for compliant workers who will unquestioningly accept the conditions offered in exchange for the generally lucrative salaries. It’s the kind of employees that they need to keep their internal corporate identity of being the good guys as tech goes from being universally beloved to generally reviled by society in general.