Yeah. Shockingly, people store things where it is convenient to have them. :) I’m glad I didn’t have a keyless system to worry about.
I did read the article. I’m unfamiliar with the “hacking” tools or methods they mention given they use terms like emulator. I was simply sharing one wireless attack that is common in certain areas and why.
I think most of the wireless attacks aren’t trying to be so sophisticated. They target cars parked at home with a relay attack: a repeater antenna rebroadcasts the signal from the car to the fob inside and vice versa, tricking the car into thinking the fob is nearby. Canada has seen a large spike in this kind of attack. Faraday pouches that you put the fob inside of at home mitigate the attack.
I’m not sure what the article is referencing, which is probably a little more exotic, but relay attacks are very common against keyless cars. Keyless cars are constantly pinging for their matching fob. A relay attack just involves a repeater antenna held outside the car that repeats the signal between the car and the fob inside the house. Since many people leave the fob near the front of the house, it works and allows thieves to enter and start the car. Canada has had a big problem with car thieves using relay attacks to then drive cars into shipping containers and sell them overseas.
With coffee, all things heart palpitations are possible. It took me about a year and a half between work and studies. Definitely not a day. 😀
That’s awesome, but no, they made something far more useful, lol. I’m glad to see projects like that though; it’s a lost art!
Years and years ago I built my own 16-bit computer from the NAND gates up. ALU, etc., all built from scratch. Wrote the assembler, then wrote a compiler for a lightweight object-oriented language. Built the OS, network stack, etc. At the end of the day I had a really neat, absolutely useless computer. The knowledge was what I wanted, not a usable computer.
Building something actually useful and modern takes so much more work. I could never even make a dent with the hour, max, that I have each day outside of work and family. Plus, I worked in technology for 25 years and ended as a director of engineering before fully leaving tech behind and taking a leadership position.
I’ve done so much tech work. I’m ready to spend my down time in nature, and watching birds, and skiing.
The article says it would be extreme for Steam to show a notice on snap installs saying it isn’t an official package and that errors should be reported to Snap. But that seems pretty reasonable to me, especially since the snap package doesn’t include that in its own description. Is there any reason why that would be considered extreme, in the face of higher-than-normal error rates with the package and the lack of an appropriate package description?
Thanks for the article, it was a fun read. I’ll have to go back and re-read the majority opinion because I do remember some interesting analysis on it even if I disagree with the outcome.
While not related from a legal standpoint, the use of iPhones and intermediate devices reminds me of a Supreme Court case that I wrote a brief about. The crux of it was a streaming service that operated large arrays of micro-antennas to pick up over-the-air content and offer it as a streaming service to customers. They uniquely associated individual customers with streams from individual antennas so they could argue that they were not copying the material but merely transmitting it.
I forget the details, but ultimately I believe they lost. It was an interesting case.
I use a terminal whenever I’m doing work that I want to automate, when it is the only way to do something (such as parameters that are CLI-only), or when using a GUI would require additional software I don’t otherwise want.
I play games and generally do rec time in a GUI, but I do all my git and docker work from the cli.
Since everyone else gave a joke answer, I’ll take a stab in the dark and say the upper limits would be the availability of hydrogen and the physical limitations of transforming heat output into electricity. Hydrogen is the most common element, but 96% of it is currently produced from fossil fuels. After that, it would be how well you can scale up turbines to efficiently convert heat into electricity.
Part of that is the racket that is software licensing for mainframes. Many vendors charge for products like CA7 based on the machine’s computational capacity. You can introduce soft limits or send usage reports, but not all vendors accept that to lower your price. Super expensive software costs, at least back when I worked on z/OS.
Right, Google isn’t one to trust. So paid services and clear data handling practices.
I would say don’t trust free services in general. There are plenty of paid service providers that handle your data well.
That’s a good takeaway. AWS is the ultimate Swiss army knife, but it is easy to misconfigure. Personally, when you are first learning AWS, I wouldn’t put more data in than you are willing to pay for on the most expensive tier. AWS also gives you options to set price alerts, so if you do start playing with it, spend the time to set cost alerts so you know when something is going awry.
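For what it’s worth, one of those cost alerts can be set up from the CLI with AWS Budgets. This is just a sketch, not a recommendation of specific numbers; the account ID, budget amount, and email address below are all placeholders:

```shell
# Email me when actual spend passes 80% of a $10/month budget.
aws budgets create-budget \
  --account-id 111111111111 \
  --budget '{"BudgetName": "monthly-cap", "BudgetLimit": {"Amount": "10", "Unit": "USD"}, "TimeUnit": "MONTHLY", "BudgetType": "COST"}' \
  --notifications-with-subscribers '[{
    "Notification": {"NotificationType": "ACTUAL", "ComparisonOperator": "GREATER_THAN", "Threshold": 80, "ThresholdType": "PERCENTAGE"},
    "Subscribers": [{"SubscriptionType": "EMAIL", "Address": "me@example.com"}]
  }]'
```

You can also do the same thing by clicking through the Budgets page in the console, which is probably easier the first time.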
Have a great day!
So you just asked about the most confusing thing in AWS service naming, thanks to how the names changed over time.
Before S3 had an archival tier, there existed a separate service that AWS named AWS Glacier Storage, and then renamed to AWS S3 Glacier.
Around 2012 AWS started adding tiers to S3, which made the standalone service redundant. I’d recommend you look at S3 proper unless you have something like a Synology that can directly integrate with the older job-based API used by the original Glacier service.
So, let’s say I have a 1TB archival file, a single tarball, and I upload it to a brand new S3 bucket, without versioning, special features, etc., except it has a lifecycle policy to move objects from S3 Standard to S3 Glacier Instant Retrieval after 0 days. So effectively, I upload the file and it moves to Glacier-class storage.
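A lifecycle rule like that can be sketched with the AWS CLI. The bucket name here is hypothetical; `GLACIER_IR` is the storage-class identifier S3 uses for Glacier Instant Retrieval:

```shell
# Move every object in the bucket to Glacier Instant Retrieval
# immediately (0 days) after upload.
aws s3api put-bucket-lifecycle-configuration \
  --bucket my-archive-bucket \
  --lifecycle-configuration '{
    "Rules": [{
      "ID": "everything-to-glacier-ir",
      "Status": "Enabled",
      "Filter": {},
      "Transitions": [{"Days": 0, "StorageClass": "GLACIER_IR"}]
    }]
  }'
```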
S3 Standard is ~$24/TB/month, and let’s say, worst case, our data sits on Standard for one whole day before moving:
$0.77 + $0.005 (API cost of the PUT)
Then there is the lifecycle charge to move the data from Standard to Glacier, with one request per object each way. Since we only have one object, the cost is:
$0.004 out of Standard
$0.02 into Glacier
The Glacier Instant Retrieval tier costs ~$4.10/TB/month. Since we would be there all but one day, the cost on the first bill would be:
$3.95
From the second month onward you would pay just the ~$4.10/month, unless you are constantly adding or removing data.
Let’s say six months later you download your 1tb archive file. That would incur a cost of up to $30.
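If it helps, the first-month arithmetic can be sketched in a few lines of Python. The prices are the approximate ones quoted above (they vary a bit by region), assuming a 30-day month:

```python
# First-month cost sketch for 1 TB: one day on S3 Standard, then
# Glacier Instant Retrieval for the rest of a 30-day month.
STANDARD_PER_TB_MONTH = 24.00    # ~$0.023/GB, region-dependent
GLACIER_IR_PER_TB_MONTH = 4.10   # ~$0.004/GB, region-dependent
DAYS_IN_MONTH = 30

day_on_standard = STANDARD_PER_TB_MONTH / DAYS_IN_MONTH   # ~$0.80
put_request = 0.005                                       # the single PUT
lifecycle_out = 0.004                                     # transition request out of Standard
lifecycle_in = 0.02                                       # transition request into Glacier IR
glacier_rest_of_month = GLACIER_IR_PER_TB_MONTH * (DAYS_IN_MONTH - 1) / DAYS_IN_MONTH  # ~$3.96

first_bill = (day_on_standard + put_request + lifecycle_out
              + lifecycle_in + glacier_rest_of_month)
print(f"First month: ~${first_bill:.2f}")
print(f"Later months: ~${GLACIER_IR_PER_TB_MONTH:.2f}")
```

The first bill lands a little under $5, then it settles to the flat Glacier rate. Retrieval charges, like the ~$30 figure, only show up when you actually pull the data back.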
Now I know that seems complicated and expensive. It is, because it is built to provide services to people like me in my former role as director of engineering, with complex needs and budgets to pay for it. It doesn’t make sense as a large-scale backup of personal data, unless you also want to leverage other AWS services, or you are truly just dumping the data away and will likely never need to retrieve it.
S3 is great for complying with HIPAA, feeding data into a CDN, and generally moving data around in a performant way. I’ve literally dropped a petabyte of data into S3 and it just took it and did its thing.
In my personal AWS account I use S3 as a place to dump cache contents built by Lambda functions and served up by API Gateway. Doing stuff like that is super cheap. I also use private git repos (CodeCommit), a private container registry (ECR), and a container host (ECS), and it is nice to have all of that stuff just click together.
For backing up my personal computer, I use iDrive Personal and OneDrive, where I don’t have to worry about the cost per object, etc. iDrive (not an Apple service) lets you back up multiple devices to their platform and keeps them versioned.
Anyway, happy to help answer questions. Have a great day.
Just because they don’t issue a bill doesn’t mean they don’t track costs. They track labor, labor rates, and consumables.
That said, this particular treatment is very involved. They harvest cells over multiple periods, send them to a lab to be modified, and when they are ready they do chemotherapy to kill your immune system, then do a bone marrow transplant to introduce the modified cells, and then you have to be in isolation in a hospital until your immune system comes back. Even the best facilities are saying they can only do 5-10 of these per year.
Pretty crazy.
Thanks for posting. I just deployed to my container host in AWS ECS and it’s working well in my testing. Very easy deployment with docker.
Hey, sorry it took so long to see your question. Here is a paper (PDF) on the subject with diagrams.
https://www.research-collection.ethz.ch/bitstream/handle/20.500.11850/42365/eth-4572-01.pdf
Edit: and here is a Times article that covers the problem in one area. https://www.nytimes.com/2024/02/24/world/canada/toronto-car-theft-epidemic.html