
  • The argument I see most commonly from people on the fediverse (which I happen to agree with) is really not about what current copyright laws and treaties say or how they should be interpreted, but about how people think things should be (even if that requires changing the laws to make it that way).

    And it fundamentally comes down to economics - the study of how resources should be distributed. Apart from oligarchs, and the wannabe oligarchs who serve as useful idiots for the real ones, pretty much everyone wants a relatively fair and equal distribution of wealth amongst the people (left and right differ on exactly how equal things should be, but there is still some common ground). Hardly anyone really wants serfdom or anything like it, where all the wealth and power is concentrated in the hands of a few (it is a spectrum of how concentrated, obviously, but very few people want the extreme end).

    Depending on how things go, AI technologies have the power to serve humanity and lift everyone up equally if they are widely distributed, removing barriers and breaking existing ‘moats’ that let a few oligarchs hoard a lot of resources. Or it could go the other way: only the oligarchs have access to the state-of-the-art model weights, and they use them to undercut whatever they want in the economy until they own everything, and everyone else rents everything from them on their terms.

    The first scenario is a utopia, the second a dystopia, and the way AI is regulated is the fork in the road between them. So of course people are going to cheer for regulation that steers towards the utopia.

    That means things like:

    • Fighting back when the oligarchs try to make ‘AI Safety’ mean that there should be no Open Source models, and that they should tightly control how and for what the models can be used. The biggest AI safety issue is that we end up in a dystopian AI-fueled serfdom; FLOSS models, and the freedom for the common people to use them, actually help to reduce the chances of that outcome.
    • Not allowing ‘AI washing’, where oligarchs can take humanity’s collective work, put it through an algorithm, and produce a competing thing that they control - unless everyone has equal access to it. One policy that would work for this: if you create a model based on other people’s work, and want to use that model for a commercial purpose, then you must publicly release the model and the model weights. That would be a fair trade-off for letting them use that information for training purposes.

    Fundamentally, all of this just exacerbates the cracks in copyright as a policy. I personally think a better system would look like this:

    • Everyone gets paid a Universal Basic Income, and every organisation and individual making a profit pays tax (in proportion to their profits) to fund the UBI.
    • All forms of intellectual property rights (except trademarks) are abolished - copyright, patents, and trade secrets are no longer enforced by law. The UBI replaces them as compensation to creators.
    • It is illegal to discriminate against someone for publicly disclosing a work they have access to, as long as they didn’t accept valuable consideration to make that disclosure. So for example, if an OpenAI employee publicly released the model weights for one of OpenAI’s models without permission from anyone, it would be illegal for OpenAI to demote / fire / refuse to promote / pay them differently on that basis, and for any other company to factor that into their hiring decision. There would be exceptions for personally identifiable information (e.g. you can’t release the client list or photos of real people without consequences), and disclosure would have to be public (i.e. not just to a competitor, it has to be to everyone) and uncompensated (i.e. you can’t take money from a competitor to release particular information).

    If we had that policy, I’d be okay with AI companies slurping up everything and training model weights on it.

    However, with current policies, we are being pushed down the dystopic path, where AI companies take what they want and never give anything back.


  • There’s a lot one side of the market can do with collusion to drive up prices.

    For example, the housing supply available on the market fluctuates over time - once someone has a long-term tenancy, their price is locked in until the next legal opportunity to change it. In a low season - fewer people living in an area, higher vacancy rate - tenants in a competitive market often want a long-term contract (if they are planning on staying), and the landlord who can offer one will win the contract. That landlord gets income during the low period, but forgoes the higher rent they might get during a high period. Another landlord who takes a hardline policy of insisting tenants renew during the high period would lose out - they would simply miss out on rent entirely during the low period, likely making less than the first landlord.

    Now if the landlords form a cartel during the low period and make it impossible to lease a rental property long term, the tenants have no choice but to renegotiate the price during the high period. The landlords no longer have to choose between a consistent lower rent and forgoing low-period revenue to get the higher high-period rent - instead they get the lower rent during the low period (at a slightly lower occupancy rate, but shared across all the landlords) AND the higher rent during the high period.
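
    To put made-up numbers on it: say high-season rent is $1,000/month and low-season rent is $800/month, each season lasting six months. In a competitive market, a landlord might win a tenant with a 12-month lease at $850, collecting $10,200 for the year, while a hardliner who refuses long leases sits vacant through the low season and collects only $6,000. Under a cartel that refuses long leases, every tenant must pay $800 × 6 + $1,000 × 6 = $10,800 - more than the competitive $10,200 - with the small cost of extra low-season vacancy spread across all the landlords.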




  • It could also go the other way: someone could sue Google or other companies. Web browsers and ad blockers run on the client, not the server, generally with the authorisation of the owner of that client system. An ad blocker is a technical measure, imposed by the owner of the system, to prevent unauthorised code (i.e. unwanted ads) from running on it. Anti-ad-blocker tech is really an attempt to run software on someone’s computer that the owner has not authorised, by circumventing the measures they have deployed to stop it running. That sounds very similar to the definition of computer fraud / abuse / unauthorised access to a computer system / illegal hacking in many jurisdictions.



  • Maybe technically in Florida and Texas, given that they passed laws to try to stop sites deplatforming Trump.

    https://www.scstatehouse.gov/sess125_2023-2024/bills/3102.htm

    “The owner or operator of a social media website who contracts with a social media website user in this State is subject to a private right of action by a user if the social media website purposely: … (2) uses an algorithm to disfavor, shadowban, or censure the user’s religious speech or political speech”.

    In May 2022, the US Court of Appeals for the 11th Circuit ruled to strike down the law (and there was a similar 5th Circuit judgement), but just this month the US Supreme Court vacated the Court of Appeals judgement (i.e. reinstated the law) and remanded it back to the respective Courts of Appeals. That said, the grounds for doing so were that the courts had not done the proper analysis, and once they do, it might be struck down again. But for now, the laws are technically not struck down.

    It would be ironic if, after conservatives passed this law and stacked the Supreme Court to get the challenge to it vacated, the first major use of it was against Xitter for censoring Harris!



  • Would you say it’s unfair to base pricing on any attribute of your customer / customer base?

    A business being in a position to implement differential pricing (at least beyond how they divide up their fixed costs) is a sign that something is unfair. The unfairness is not in how they implement differential pricing, but in the fact that they can do it at all and still keep customers.

    YouTube can implement differential pricing because there is a power imbalance between them and consumers - if the consumers want access to a lot of content provided by people other than YouTube through YouTube, YouTube is in a position to say ‘take it or leave it’ about their prices, and consumers do not have another reasonable choice.

    The reason they have this imbalance of market power and can implement differential pricing is because there are significant barriers to entry to compete with YouTube, preventing the emergence of a field of competitors. If anyone on the Internet could easily spin up a clone of YouTube, and charge lower prices for the equivalent service, competitors would pop up and undercut YouTube on pricing.

    The biggest barrier is network effects - YouTube has the most users because it has the most content, and it has the most content because people upload where the most users are. This cycle helps YouTube and hinders competitors.

    This is a classic case where regulators should step in. Imagine if large video providers were required to federate uploaded content over ActivityPub, so that anyone could set up their own YouTube competitor with all the same content. The price of the cheapest YouTube clones would quickly drop, and no one would have a reason to use YouTube.


  • would not be surprised if regional pricing is pretty much just above the break even mark

    And in an efficient market, that’s how much the service would cost for everyone, because otherwise I could just go to a competitor of YouTube for less, and YouTube would have to lower their prices to win customers, and so on until no one could lower their prices further without losing money.

    Unfortunately, efficient markets are a neoliberal fantasy. In real life there are network effects - people upload videos to YouTube because it has the most viewers, and it has the most viewers because it has the most videos. This makes it practically impossible for anyone to compete with them effectively, and it is why they can put their prices up in some regions to extract more profit. The proper solution is for regulators to step in and require things like data portability (e.g. requiring monopolists to publish videos they receive over open standards like ActivityPub), but regulatory capture makes that unlikely. In a just world that would happen, and their pricing would be close to the cost of running the platform.

    So the people paying higher regional prices are paying money that, in a just world, they shouldn’t have to pay, while those using VPNs to pay less are paying an amount closer to what it should be in a just world. That makes the VPN users people mitigating Google’s abuse, not abusers.


  • Yes, but for companies like Google, the vast majority of systems administration and SRE work is done over the Internet from wherever staff are, not by someone local (excluding things like physical rack installation or pulling fibre, which are a minority of the total effort). And generally the costs of bandwidth and installing hardware are higher in places with a smaller tech industry. For example, when Google on-sells its compute through GCP (at prices likely proportional to its costs), it charges about 20% more for an n1-highcpu-2 instance in Mumbai than in Oregon, US.


  • that’s abuse of regional pricing

    More like regional pricing is an attempt to maximise value extraction from consumers, to best exploit their near monopoly. The abuse is by Google: savvy consumers work around that abuse, and then get hit by more abuse from Google.

    Regional pricing is a way to implement differential pricing - every business dreams of extracting more money from wealthy customers while still making a profit on less wealthy ones, rather than driving them away with high prices. They find various ways to tell the wealthy and less wealthy apart (for example, whether you come from a country with a higher average income, or whether your User-Agent or fingerprint suggests an expensive phone), and charge the wealthy more.

    However, you can be assured that they are charging even the people they’ve identified as less wealthy (e.g. in a low average income region) more than their marginal cost. Since YouTube’s costs are primarily marginal rather than fixed (it is very bandwidth and server heavy), and there is no reason to expect users in high-income locations to cost YouTube more, it is a safe assumption that the gap between the regional prices is all extra profit.

    High profits are a result of lack of competition - in a competitive market, they wouldn’t exist.

    So all this comes full circle to Google exploiting a non-competitive market.


  • they have ran out of VC money

    You know YouTube is owned by Google, not VC firms, right?

    Big companies sometimes keep a division / subsidiary less profitable for a time for a strategic reason, and then tighten the screws.

    They generally only do this if they believe it will eventually be profitable over the long term (or will support another part of the strategy so it is profitable overall). Otherwise they would have sold or shut it down earlier - the plan is always, eventually, to be profitable.

    However, while an unprofitable business always means either a plan to tighten the screws, or a plan to sell it / shut it down, tightening the screws doesn’t mean the business is unprofitable. They always want to be more profitable, even if they already are.


  • A1kmm@lemmy.amxl.com to Linux@lemmy.ml · open letter to the NixOS foundation · 5 months ago

    I wonder if this is social engineering in the same vein as the xz takeover? I see a few structural similarities:

    • A lot of pressure being put on a maintainer, for reasons that are not particularly clear to an external observer.
    • An anonymous source, identified only as KA - so the campaign can’t be linked to them as a past contributor, and it is not possible to find people who actually know the instigator. In the xz case, a whole lot of anonymous personas showed up to put the maintainer under pressure.
    • A major plank of this seems to be attacking a maintainer for “Avoiding giving away authority”. In the xz attack, the attacker sought to get more access, and created astroturfed pressure to achieve that end.
    • It is on a specially allocated domain with full WHOIS privacy, hosted on GitHub on an org with hidden project owners.

    My advice to those attacked here is to keep up the good work on Nix and NixOS, and don’t give in to what could be social engineering trying to manipulate you into acting against the community’s interests.


  • Most of mine are variations of getting confused about which system / device is which:

    • Had two magnetic HDDs in RAID-1 as my root partition. One of the drives started getting SATA errors (couldn’t write), so I powered down and disconnected what I thought was the bad disk. On reboot, there were lots of errors from fsck, including many about inodes getting reconnected to /lost+found. I should have realised at that point that it was a bad idea to rebuild onto the other (good) drive from that one. Instead I did, and ended up restoring from my (fortunately very recent!) backup.
    • I once typed sudo pm-suspend on my laptop because I had an important presentation coming up and wanted to keep my battery charged. Later I noticed my laptop was running low on power (so rushed to find a charger), and also that I needed a file from home I’d forgotten to grab. It turned out I had actually been in an ssh terminal connected to my home computer, which I’d accidentally suspended! This sort of thing is so common that some distros (e.g. Debian) have a package called molly-guard specifically to prevent it - I highly recommend it and install it everywhere now.
    • I also once thought I was sending a command to a local testing VM while wiping a database directory for re-installation. It turns out I typed it into the wrong terminal and sent it to a dev prod environment (i.e. one actively used by developers as part of their daily workflow); we had to scramble to restore it from backup, and meanwhile no one could deploy anything.

  • As if telling Reddit, Facebook, or Google what to put on their roadmap as an ordinary consumer would actually work.

    At least with FLOSS, if you want something, and it is a good thing the developers like, you can likely get it merged. If not, you can fork and still have the feature locally. Good luck getting that freedom with a closed-source product.

    For software I develop, I do find it helpful when people making feature suggestions tell me what would be useful for them and why, but that doesn’t entitle them to any of my time, or to dictate which features I prioritise. The alternative is “I gave you something you like for free, so now I owe you making it even better for you”, which is obviously nonsense.


  • If you have control of the domain, you can also get an X.509 certificate from any CA (e.g. for free from LetsEncrypt), and put up a new server on that domain with a valid cert. If that server supports ActivityPub, it can serve new public keys (for private keys you control) for every user on the server, and use the corresponding private keys to sign messages from any of those users to any community they are still subscribed to. In addition, any users on other servers still posting to / interacting with communities on that server would cause their servers to deliver that activity to the inbox on the new server.
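
    As a concrete illustration (assuming a Mastodon-style URL layout; alice is a made-up username), this is roughly how other servers fetch the signing key they trust for a user - whoever controls the domain controls what it returns:

    # Fetch the ActivityPub actor document for a (hypothetical) user and
    # show the public key other servers use to verify its signed activities.
    curl -s -H 'Accept: application/activity+json' \
        https://queer.af/users/alice | jq '.publicKey'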

    This means any usernames or communities on queer.af should no longer be trusted.


  • more is a legitimate program (it reads a file and writes it out one page at a time), if it is the real more. It is a memory hog in that (unlike the more advanced pager less) it reads the entire file into memory.

    I did an experiment to see if I could get the real more to show file descriptors similar to yours. I piped yes "" | head -n10000 >/tmp/test, then ran more < /tmp/test 2>/dev/null. Then I ran ls -l /proc/`pidof more`/fd.

    Results:

    lr-x------ 1 andrew andrew 64 Nov  5 14:56 0 -> /tmp/test
    lrwx------ 1 andrew andrew 64 Nov  5 14:56 1 -> /dev/pts/2
    l-wx------ 1 andrew andrew 64 Nov  5 14:56 2 -> /dev/null
    lrwx------ 1 andrew andrew 64 Nov  5 14:56 3 -> 'anon_inode:[signalfd]'
    

    I think this suggests your open files are consistent with the real more, with stderr redirected to /dev/null. Most likely, you were running something that called more to show you (or someone else logged in on a PTY) something that had been written to /tmp/RG3tBlTNF8. Next time, you could find the parent of the more process, or look up what else is attached to the same PTS with the fuser command.
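
    For example, something along these lines (assuming the more process is still running):

    # Show the parent process of the running more instance
    ps -o ppid=,comm= -p "$(pidof more)"
    # Show which processes are attached to the same pseudo-terminal
    fuser -v /dev/pts/2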


  • I use Restic, called from cron, with a password file containing a long randomly generated key.

    I back up with Restic to a repository on a different local hard drive (not part of my main RAID array), with --exclude-caches, and I also exclude lots of files that can easily be re-generated / re-installed / re-downloaded, so my backups are focused on important data. I make sure to include all important data, including /etc (and I also back up the output of dpkg --get-selections). I auto-prune my repository to apply a policy on how far back I keep (de-duplicated) Restic snapshots.

    Once the backup completes, my script runs du -s on the repository and emails me if it is unexpectedly big (e.g. I forgot to exclude some new massive file); otherwise it uses rclone sync to sync the repository from the local disk to Backblaze B2.
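
    A minimal sketch of what that cron job might look like (the repository path, excludes file, size threshold, bucket name, and email address are all made up):

    #!/bin/sh
    # Record installed packages so the system can be rebuilt from the backup
    dpkg --get-selections > /etc/backup/pkg-selections
    # Back up to the local repository; the excludes file would list caches
    # and anything easily re-generated / re-installed / re-downloaded
    restic -r /mnt/backup2/restic-repo --password-file /root/.restic-pass \
        backup / --exclude-caches --exclude-file=/etc/backup/excludes
    # Apply the retention policy and prune unreferenced data
    restic -r /mnt/backup2/restic-repo --password-file /root/.restic-pass \
        forget --keep-daily 7 --keep-weekly 5 --keep-monthly 12 --prune
    # Warn if the repository is unexpectedly large, otherwise sync off-site
    size=$(du -s /mnt/backup2/restic-repo | cut -f1)
    if [ "$size" -gt 200000000 ]; then
        echo "Restic repo is ${size} KiB" | mail -s 'Backup size warning' me@example.com
    else
        rclone sync /mnt/backup2/restic-repo b2:my-bucket/restic-repo
    fi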

    I back up my B2 password (in an encrypted password database) separately, along with the Restic decryption key. The restore procedure is: if the local hard drive is intact, restore with Restic from the last good snapshot in the local repository; if it is also destroyed, rclone sync the repository from Backblaze B2 back to local, and then restore from that with Restic.

    Postgres databases I handle differently (they aren’t included in my Restic backups, except for config files): I back them up with pgbackrest to Backblaze B2, with archive_mode on and an archive_command that archives WALs to Backblaze. This allows me to do PITR (point-in-time recovery), back to any point within my pgbackrest retention policy.
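
    A rough sketch of the moving parts (the stanza name and target time are illustrative):

    # postgresql.conf - ship WALs via pgbackrest:
    #   archive_mode = on
    #   archive_command = 'pgbackrest --stanza=main archive-push %p'
    # Take a backup, and verify archiving is working
    pgbackrest --stanza=main backup
    pgbackrest --stanza=main check
    # Point-in-time recovery back to a chosen moment
    pgbackrest --stanza=main restore --type=time --target='2024-11-05 14:00:00'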

    For Docker containers, I create them with docker-compose and keep the docker-compose.yml so I can easily re-create them. I avoid keeping state in named volumes, and instead bind-mount a location on the host and back up the important state there (or, where the service supports it, keep the state in PostgreSQL instead).
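
    For example, a minimal sketch (the service name, image, and paths are made up) - the bind mount keeps the state on the host where the backup job above can see it:

    # docker-compose.yml (illustrative) - state lives on the host, not in a volume
    cat > docker-compose.yml <<'EOF'
    services:
      myservice:
        image: example/myservice:latest
        restart: unless-stopped
        volumes:
          - /srv/myservice/data:/var/lib/myservice
    EOF
    docker-compose up -d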