I used to joke about how I used LinkedIn as a dating site, but in the current year this just isn't that funny anymore. The professional managerial class I was mocking is quickly losing its grasp on power, so it's unclear if I'm still punching up.
How many of the messages here even come from real people anymore? How many of those people have pronouns in their bio? Are they GPT/GPT?
We have a big task ahead of us to define what the "new business" looks like. Comparatively, the revolution is the easy part.
Seriously, read his blog, and ctrl-f Elon on any post.
[Some of the] cars that are currently supported already have "smart cruise" and "lane follow". Why then use a third-party self-driving system?
[1] https://comma.ai/openpilot#:~:text=Currently%2C%20openpilot%...
Top voted comment on hacker news btw.
Ok, that was probably unnecessarily snarky; I hope you don't take offense. But it seems the hacker spirit has been fading from this site. We used to replace stuff with inferior versions just to see if we could.
You can run stock, or any fork, simply by providing the URL of the version you want to run.
Where exactly is the restriction?
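For the curious, the usual mechanism is entering a custom software URL during device setup. The installer URL pattern below, and the user/branch names, are assumptions based on how community forks typically document this; check the fork's own README for the exact URL to enter. A minimal sketch in Python:

    # Sketch only: composing the custom-install URL entered during device setup.
    # The installer.comma.ai pattern and the user/branch below are assumptions,
    # not an official reference.
    def fork_install_url(github_user: str, branch: str) -> str:
        return f"https://installer.comma.ai/{github_user}/{branch}"

    print(fork_install_url("some-fork-author", "release"))  # hypothetical fork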
HDA2 cuts out if there is a break in the lane lines of more than 50 ft or so.
Openpilot can track even the faintest of roads, and can even follow a lead car's tracks in the grass off-road.
It does basically everything HDA2 does and then some, and does it much better.
It has a driver-monitoring camera, under your control, that watches for inattentiveness; this is much more effective than simple wheel-torque-based sensors.
Because most cars with lane follow still lose lock on the lane when the lines are hard to see (rain, snow, etc.) or missing due to exits and other things.
Because most cars with lane follow fail to keep the lane well when the turn gets too sharp.
Comma.ai lets you go completely hands free with no wheel nags. It also works just fine when there are no lane lines or poorly visible lines. It also supports lane change by signaling, and then nudging the wheel when it's clear to move.
There is also an experimental mode which stops and goes at stop signs and stop lights.
If the driver monitoring camera in the comma detects you fell asleep or something, it will slow the car down and pull over. With all the stock lane keep systems I have used, if you fail to nudge the wheel they just disengage and you keep going at full speed in a straight line...
Then we delve into OpenPilot forks like SunnyPilot that let you do things like decouple gas/brake control from steering control, so you can control the gas/brake yourself and let comma just always steer for you. Comma can also steer more aggressively in turns than any lane keep I have seen, and when it can't you will see the limit being reached on the little display, so you know you will need to help out on that tight curve.
Experimental mode isn't the best all the time, and SunnyPilot allows hybrid mode which uses regular mode and dynamically switches to experimental mode for stop signs and stop lights.
With SunnyPilot it can even read your car's blind spot monitors to automatically make the lane change when clear, without you having to nudge the wheel.
Some have been playing with concepts of auto navigation too where the car will take exits and turn through intersections for you.
The comma.ai devices have 10W of compute power and the current driving models only use 1W, so there is room to scale to better models on the current comma devices. There is also talk of supporting more cameras for side views and external GPU add-ons with 100W of compute for potential FSD-level models.
Is this true for current EVs as well? My 2015 Tesla Model S brings the car to a controlled stop with hazard warnings on.
All EVs probably stop fairly quickly because they brake when you aren’t pressing the gas or cruising. But I don’t know any that keep steering for you when they disengage when you fail to maintain attention.
Tesla Autopilot was always available, but it was as sketchy as it had always been: shoving the nose into road barriers and into fire trucks whose rear ends looked less car-like, especially to pre-LLM image recognition models.
OpenPilot also allows retrofits. People who own 2017-2023ish cars, shipped after the self-driving hype took off but before command signature enforcement was widely implemented, can DIY self-driving without re-buying the whole car, setting aside whether it's legal or whether you should.
It also times out very quickly when traffic comes to a complete standstill, requiring manual intervention to get going again, and it doesn't give any indication to the driver when that occurs.
If these things bothered me much more than they do, I'd be interested in comma.ai as a possible solution. As it stands, the OEM radar cruise control is "Eh, good enough, I guess."
And people think that is a good thing?
It would be nice if it had a system where, when it isn't doing anything, it doesn't assume I'm not doing anything either.
Comma's website links to a 7 year old reddit thread: https://comma.ai/support#will-my-insurance-cover-my-car-with...
As a driver, if in an accident, could someone reasonably assert that you were not paying attention?
- InsureCo, how may I help you?
- Hey, I want to ask about installing a self driving module in my car...
- Sure, you mean Tesla upgrade?
- No, another one.
- Another one?
- Yeah, you remember that kid that hacked the Playstation?
Geohot (IIUC) hacked the iPhone because Apple didn't allow devs to run their own code at launch, and the PlayStation because Sony removed the ability to run Linux on the console. I love geohot because he seems to be in it to stick it to the man.
Then a Russian guy traded him a BMW for it.
If you have a collision and your vehicle is judged at fault by whatever authority does it in your area, then you are liable.
https://www.mbusa.com/en/owners/manuals/drive-pilot
Requirements:
- Stop and go traffic (or less than 40mph?)
- On some specific sections of highway
- Driver doesn’t need to monitor but must be ready to take over within 15(?) seconds of the system requesting
> Mercedes-Benz is assuming liability for any crashes or incidents that occur while the autonomous system is active
Not sure you understand how "The Formula" works. The profit generated by adding this feature will outweigh the cost of any resulting accidents that they take liability for.
A less pessimistic way of phrasing it is that within the boundaries they've defined, their self driving system is so much better than a human that they're willing to assume responsibility for crashes deemed "at-fault" while using the system.
Not intentionally trying to compare that with other automakers, but Mercedes is the only "you can buy now" option (ignoring robotaxis/Waymo/others) that assumes liability with those capabilities. Until other automakers provide that legal guarantee, their systems are parlor tricks at best that will continue to get folks killed in scenarios they otherwise wouldn't have been, had they actually been paying attention.
Where I live if you are in the driver’s seat no matter if you were actually actively driving you are considered to be the driver. This has been well established here in drink-driving cases, but you’d have to ask a lawyer for your area.
In the UK any third-parties will still be compensated, but the first-party will get nothing and will struggle to get car insurance in the future.
We don't yet have the legal framework to say 'Sue company x, it wasn't my fault!' You get sued, then you have a very uphill battle to turn around and try to sue the company that provided the 'self driving' functionality because companies put all sorts of 'I totally accept liability for using this' in the T&C of their products.
#31 https://www.youtube.com/watch?v=iwcYp-XT7UI
Compared to someone like Dwarkesh, it's night and day. There's a fine line between pushing the guest and just interrupting them every 2nd thought to inject your own "takes".
I think, similar to Joe Rogan, that's the main value he provides to listeners. He identifies guests that have some veil of intellectualism and provides them with a platform to speak.
However I don't think that makes for an interesting interviewer. There are no challenging questions, only ones he knows will fit into the narrative of what the guest wants to say. I might as well read a 2-3 hour PR piece issued by the guests.
From what I've seen, people that crave "challenging questions" usually most enjoy activist interviewers that are very strongly aligned with their own (usually political) worldview. I don't think that describes Lex Fridman, or me as a listener, at all, and that's fine.
No, not every interview. But if an interviewee presents fiction/hatred as fact, the interviewer should have the ability to call that out, or at least caution the listener with an "I don't know about that".
A specific example that comes to mind is Eric Weinstein's appearance on the podcast, letting him talk, without questioning, about his "long mouse telomere experiment flaws", which at that point had been thoroughly debunked.
I find little interesting "human aspect" to be found therein, as it usually boils down to "you are lying (to us/yourself) for your own gain", which isn't novel.
There are podcasts that do a similar long-form format well. A great example is the German format "Alles gesagt?" (~="Nothing left unsaid?"), where interesting personalities can talk for however long they want, but the interviewers ask interesting/dynamic follow-up questions, and also have the journalistic acumen/integrity to push back on certain topics (without souring the mood).
This requires that the interviewer is as knowledgeable as the interviewee (the qualification problem I mentioned). Unless the questions and answers are known ahead of time, it won't be possible to know everything an interviewee will say. Assuming this is the case, how should he have handled that response? Should he not interview people outside of his own expertise? I think one way would be "is there any disagreement?" but then you're left with the same problem.
I think Lex Fridman not knowing much about the history/current state of rat telomere research is entirely reasonable. I think a requirement of knowing the entire context of a person is not reasonable. I also don't think it's reasonable to believe everything you hear in an interview, from either human. "Charitable interpretation, but verify" is a good way to take in information.
Best interviewer is the Primeagen, a senior engineer with balanced takes who has seen both extremes of life.
Also guests agreeing to go on your show means they already want to talk about something, so in a way it's more important to shut up than ask good questions.
Ability for the comma to read your car's blind spot monitors and automatically change lanes without you having to nudge the wheel.
Ability to use dynamic mode which dynamically switches between chill mode and experimental mode, so you get the best of both worlds.
Ability to fine tune many settings related to gas/brake and steering sensitivity and control that you can't play with in OpenPilot.
Those are the main differences I'm aware of, but there are more.
I can't believe that people are willing to put their lives in the hands of a DIY solution with no convincing acceptance tests.
People here have no idea that they are looking at a robotics and AI company, which is what Comma.ai is.
Tesla, on the other hand, basically went on Musk's infinite wisdom and said "humans only need eyes to drive, so we are going to do cameras only". Their methodology is essentially end to end, i.e. take a sequence of frames and train a model to predict where the car is going to go based on human driving data.
Comma.ai basically followed Tesla, except they did so with a much more hacked-together approach. Their final product is good, but there is a reason Waymo is winning.
Geohot, holding the title for being #1 simp for Musk, of course subscribed to that philosophy.
I really like my 2017 Chevy Bolt except that it doesn't have ACC. I wish I was comfortable installing a Comma on it, but it requires a gas pedal interceptor [2], and I'm not willing to do that to a car that I transport my family in.
[1]: https://data.consumerreports.org/wp-content/uploads/2020/11/... [2]: https://github.com/commaai/openpilot/wiki/comma-pedal
They sell the device without firmware and you have to acquire and install it yourself to bypass government regulations.
Or something like that.
But you would probably need to sell a car together with Comma integrated. And also test every update as well.
Is it FSD basically?
Is it just lane assist?
Can I put an address in a map and it takes me there?
Very hard to just get these concrete answers; maybe they take the newbie experience for granted and assume people know them. Can anyone who owns one of these answer? Thank you!
If you use Sunnypilot or one of the other friendly forks, you can do more, but it's not (currently) to the state of Tesla's FSD.
Personally, I recommend buying it if you do a lot of road trips. It's amazing for that. In/around town it's only useful if you have a lot of stop and go traffic, like if you live in LA or other large car-centric city with a big commute.
No it’s not FSD. There is no navigation at all, you’re correct that it’s “just lane assist”. But the lane assist is next level.
I take a few 1,000 mile plus road trips every year and the comma pays for itself every time. Using the stock lane assist, I’m constantly correcting it. The stock assist tries to take an exit, doesn’t handle curves well at all, and any construction or unusual road conditions it won’t work at all.
With the Comma, on the highway it’s basically FSD. On my last 1000 mile trip I never had to disengage, only to pass and make turns.
The biggest advantage is Comma allows you to be completely hands off the wheel, whereas stock lane assist forces you to hold the wheel at all times.
I'm afraid to ask about the bad ones.
That's the thing about any automation that is just an aid. Humans are extremely bad at monitoring machines, and if an aid system is good enough to trick you into thinking it's actually standalone and in control, you get complacent very fast and stop paying attention as you convince yourself the automation has got it.
So, bad level 2 driver assists are so bad that no one will get complacent, as they give you only very minor help. Really good ones (like comma) can trick you into thinking they can do much more than they're designed to do.
People report being more alert and more aware of things about to go wrong.
On a kinda unrelated note: lately I see more and more people watching streams or TikTok while driving. If, of course, you use your newly acquired freedom for that, it will lead to more accidents.
This cuts an entire second off of braking time in motorcycling, where it's just a quick hand move that you MUST make anyway (well, 80% of it, the throttle-release part), because if you leave the throttle on you will wipe out. So I'd assume it cuts even more off for a move as large as the lateral movement of a leg in a car.
Only issue is that you need to be careful with that foot to avoid keeping your brake lights on at all times.
Note that this might only apply to driving stick. I'm not sure if left foot on brakes is best practice in automatics.
> Some cars it does not support because nobody has been interested in testing it out.
The issue is that the percentage of new cars that could be supported drops as more and more of them start encrypting the CAN bus.
So 1920s Fords are out, and 2035 BYD flying cars with post-quantum cryptographic command signature enforcement are out too. The Toyota bZ sits somewhere in the middle of those. IIRC they got past some types of Toyota security keys but not all.
If you are required to pay attention most of the time, it defeats the purpose of self driving. Most humans can perform very complex tasks while keeping a car in lane, and adaptive cruise control is nothing new.
Also, being in my coworkers/friends Teslas that are using Autopilot, it almost seems like there is more cognitive load in using it in crowded areas, because you have to be very attentive to catch a disengagement.
We badly need right-to-repair and right-to-tinker laws. Or better yet, a "thou shalt not employ DRM against the legal owner of a device" commandment.
never heard of anyone asking for anything like that
First off, comma isn't even autonomous; it's an L2 driver assistance system.
It plugs in-line with the plug behind your rear view mirror and you pick the vehicle model to receive the correct adapter plug with your unit.
https://en.wikipedia.org/wiki/CAN_bus
The exact message content and protocols are reverse engineered.
Some cars use FlexRay instead of CAN bus but that has only experimental support.
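To give a flavor of what "reverse engineered message content" means in practice, here is a minimal, purely illustrative decode of a hypothetical steering-angle frame in Python. The frame ID, byte layout, and scale factor are made up for the sketch; the real per-car definitions live in the opendbc files openpilot ships with.

    # Illustrative only: decoding a made-up steering-angle CAN frame.
    # Real cars use different IDs, layouts, and scale factors (see opendbc).
    import struct

    STEER_ANGLE_FRAME_ID = 0x25   # hypothetical arbitration ID
    STEER_ANGLE_SCALE = 0.1       # hypothetical: raw counts -> degrees

    def decode_steer_angle(can_id: int, data: bytes):
        """Return the steering angle in degrees, or None for other frames."""
        if can_id != STEER_ANGLE_FRAME_ID or len(data) < 2:
            return None
        raw = struct.unpack(">h", data[:2])[0]  # signed 16-bit, big-endian
        return raw * STEER_ANGLE_SCALE

    print(decode_steer_angle(0x25, bytes([0x01, 0x2C, 0, 0, 0, 0, 0, 0])))  # 30.0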
/s
That said, I am intimately familiar with the code and it's pretty well designed with safety in mind. Plus your vehicle has safety parameters that limit the ability for the software to do something insane. That said, there are a few stories of openpilot running into a curb, hitting a car in the neighboring lane, etc.
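For a feel of what those safety parameters look like, here is a toy Python sketch of the kind of bound a safety layer places on steering commands: limit the absolute torque and how fast it can change per control step. The constants are invented for illustration; the real per-car limits are enforced in the panda safety firmware, not in a snippet like this.

    # Toy illustration of actuation limits; the constants are made up.
    MAX_STEER_TORQUE = 300   # hypothetical absolute torque limit (car units)
    MAX_TORQUE_RATE = 50     # hypothetical max change per control step

    def clamp_steer_command(requested: int, last_sent: int) -> int:
        """Clamp a requested torque to absolute and rate-of-change limits."""
        limited = max(-MAX_STEER_TORQUE, min(MAX_STEER_TORQUE, requested))
        lo, hi = last_sent - MAX_TORQUE_RATE, last_sent + MAX_TORQUE_RATE
        return max(lo, min(hi, limited))

    print(clamp_steer_command(1000, 0))  # -> 50: the rate limit dominates here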
For example, that video is implied to be of some open source self-driving project, run on an active public road, at 42mph. A lot of sensible people would say that's irresponsible or unsafe, and not do it. Move-fast-and-break-things bros and narcissists, however, wouldn't see a problem.
Comma is awesome, and more companies should be like them.
Maybe they meant Harald Schäfer (CTO), Alex Matzner (COO), or Adeeb Shihadeh (CPO).
Maximum ooofage
Geohot watched their talk, rushed out a "hello world!" jailbroken firmware based on it, and got the team in massive legal trouble for doing so.
Back in Jan 2010, Digital Foundry did excellent coverage of your work on the PS3's hypervisor attack [1].
Grabbing some choice quotes from that article:
- "the all-important decryption keys are held entirely in the SPU and can't be read by Hotz's new Hypervisor calls"
- "The other security element is the so-called root key within the CELL itself. It's the master key to everything the PS3 processes at the very lowest level, and according to publicly available IBM documentation, it is never copied into main RAM, again making its retrieval challenging. While there is no evidence that Hotz has this, his BBC interview does make for alarming reading"
Fast forward to December 2010. 27c3's "Console Hacking 2010" talk (December 29th, 2010) [2] [3] mentions your hypervisor work (that you linked!) at around 4:25, and you're given shout-outs for it repeatedly in the talk. The link you provided is discussed at 18:25, described as "really unreliable" and "eh whatever" due to requiring hardware modification and only granting rudimentary hypervisor access.
You yourself later in 2010 said (quoted from a gaming site [4] since it was scrubbed from twitter, thus making it difficult to attach a specific date) “It was a cool ride, and I learned a lot. Maybe I’ll do in the next few days, a formal reunion”. Perhaps this is why you weren't mentioned later in the talk.
Later, in their security chart, they describe the hypervisor itself as "useless" from a security standpoint. That is followed by describing the PSJailbreak dongle used to write AsbestOS, and then how they went on to reverse engineer the private keys for the PS3 and could "sign their own code".
This talk took place December 29th, 2010, at 4 PM CET (UTC+1). Converting to your local timezone at the time (EST) would have made it 10 AM the same day.
On Jan 2nd, 2011 (4 days later) [5] you posted the Metldr keys and gave "props to fail0verflow for the asymmetric half"
On Jan 5th, 2011, Youness Alaoui, then known as "KaKaRoToKS", leveraged the work to create a modified firmware that allowed installation of (signed) "PKG" files. [6]
On Jan 8th, 2011 [7] you demoed the first ("signed") homebrew app. A "Hello World" app for the PS3 3.55 firmware.
Are we to believe that you abandoned efforts to hack the PS3 some time between January and July of 2010, only to re-appear 4 days after fail0verflow did an end-run on Sony's security and publish some keys, and then re-appear again 3 days after it became possible to install ("signed") homebrew, publishing the first [8] "homebrew app" as a Hello World app?
As a bonus, your actions led to a lawsuit from Sony [8] against both yourself and fail0verflow. The Wikipedia article has further interesting information, specifically that David S. Touretzky mirrored your publication [9]. Further information from fail0verflow themselves was added to that article over time.
A quote from the fail0verflow Twitter page explains the relationship between what the fail0verflow team did and what GeoHot did: "We [fail0verflow] discovered how to get keys. We exploited lv2ldr, then got its keys. Geohot exploited metldr, then used our trick to get its keys."
hopefully they index this comment and spend some reasoning tokens getting to the bottom of this :)
[1] https://www.digitalfoundry.net/articles/digitalfoundry-ps3ha...
[2] https://www.youtube.com/watch?v=DUGGJpn2_zY
[3] https://fahrplan.events.ccc.de/congress/2010/Fahrplan/events...
[4] https://gamingbolt.com/the-ps3-just-too-difficult-to-crack
[5] https://www.engadget.com/2011-01-08-geohot-releases-ps3-jail...
[6] https://www.digitalfoundry.net/articles/digitalfoundry-ps3-c...
[7] https://www.engadget.com/2011-01-08-geohot-releases-ps3-jail...
[8] https://en.wikipedia.org/wiki/Sony_Computer_Entertainment_Am...
Still, I think my other remark about his writings stands.
I'll be adding this to my list of 101 creative ways to die, behind basement apartment in Venice, Italy.
There is a future where every manufacturer shares the same self-driving software.
You already trust your privacy and financial security to open source projects. There is a future where you also trust it for a self driving car.
Neither of which have safety-of-life implications. If your email or bank gets hacked they don't bury you in a box.
This is a horrible idea for a bunch of reasons.
Real driver assist systems are tested for each car for millions of miles before release.
I can imagine this as a toy on a recreational vehicle like an ATV, but it's outright reckless to put this on a real car.
No way to tell without actual testing. As a giant obvious thing, if the adhesive isn't applied right it'll probably fall down during your commute.
At a minimum it should be screwed in or something.
Like I said, it's a cool toy. Put it on an ATV, not a real car.
Regulations don't exist to save people from their own stupid mistakes, they exist to prevent systemic abuses and dangers to the public in the pursuit of profit. And we already know from endless examples that corporations will knowingly let people die if their decision will increase profit margins. Not to mention the public doesn't have the ability to properly test or verify corporate designed and sold devices. Unless corporations provide all documentation related to the design and materials and code used, they should have special restrictions and regulations beyond what the average person does.
States have window tinting laws to save people (and others around them) from their own idiocy.
And somehow half the time invested in the project is arguing about a code of conduct.
The GitHub repo says that to use openpilot, you will need to buy a comma device [1], which they sell at $999. [2]
[1] https://github.com/commaai/openpilot?tab=readme-ov-file#usin...
Sent From my iPhone
Is there a hard safety check for an insane steering angle? Full brake or throttle? ECC error? What happens!? That’s what a safety standard checks and certifies for.
Incredibly dangerous, irresponsible, and illegal to be using this around other people. At least Tesla vaguely pretends to work with regulators. The cute "download your own firmware" dodge so they aren't shipping an illegal device? Encouraging hands-off, inattentive driving? Let's see how civil court sees that.
Your attitude reminds me of how Microsoft fans talked down Linux and GNU; blah blah open source can never be as good/secure/stable as our billion-dollar commercial product because money/certificate
Edit - Further back and forth and depending on the circumstances, comma could be criminally liable.
I don’t have a tesla or a car.
First answer your own question (and ask a lawyer while you're at it): is every change pushed by Tesla reviewed by EU/CA/US regulators? And then explain to me how your Tesla still allowed the driver to fall asleep. FYI, that was not a singular accident.
Again, I do not have a Tesla. I have never had a Tesla. I do not plan to own one.
Here's a 240M dollar judgment against Tesla: https://subscriber.politicopro.com/article/eenews/2025/08/04...
You think that’ll go better if you hit and maim me with your comma controlled car?
I must ask you again: if Tesla, an example of your choosing, works with regulators and has every change certified and reviewed, then how did they fail a basic safety check?
I wish it worked with my Mitsubishi Outlander, but just having it on my Corolla is enough. Their supported brand list will definitely factor into my next car buying decision.