I recently added self-attention to a network that I trained to detect walls, and it improved the Dice score for wall segmentation. I am writing this short article to summarize self-attention in CNNs. I write these notes primarily so that I can come back to them and recall what I did, but I hope you find them useful too.
Self-attention is described in this article. It increases the receptive field of a CNN without adding the computational cost associated with very large kernel sizes.
where C is the number of channels and N…
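To make the mechanism concrete, here is a minimal NumPy sketch of SAGAN-style self-attention over a feature map, where every spatial position attends to every other. The projection matrices `w_q`, `w_k`, `w_v` and the reduced query/key dimension are my own illustrative choices, not the exact configuration I trained with.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, w_q, w_k, w_v):
    """x: feature map of shape (C, H, W).

    w_q, w_k: (C', C) projections to a reduced dimension C';
    w_v: (C, C) projection for the values.
    """
    C, H, W = x.shape
    N = H * W
    flat = x.reshape(C, N)            # flatten spatial dims: (C, N)
    q = w_q @ flat                    # queries: (C', N)
    k = w_k @ flat                    # keys:    (C', N)
    v = w_v @ flat                    # values:  (C, N)
    attn = softmax(q.T @ k, axis=-1)  # (N, N): each position attends to all N positions
    out = v @ attn.T                  # weighted sum of values: (C, N)
    return out.reshape(C, H, W) + x   # residual connection back to the input
```

Note the (N, N) attention map: this is what gives every pixel a global receptive field in a single layer, at the cost of memory quadratic in the number of spatial positions.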
You can pre-condition your Tesla by adding a top-off charge to it. Pre-conditioning increases efficiency and enables regenerative braking, which also contributes to better efficiency.
If you have a regular schedule, you can simply use the Smart Battery Prep feature of the Stats app and skip this article. If your schedule is not regular, you may find this article useful. This article assumes that you have the Stats app and the Shortcuts app (by Apple) installed.
Thanks to Chase Michael, who motivated this article (after our chat on Twitter).
You can automate pre-conditioning by performing the following two steps:
Yes, it sounds like magic, but in iOS 14 Apple uses machine learning and the phone's internal inertial sensors to detect a double- or triple-tap on the back of the iPhone, and lets you associate an action with it.
I use it to interact with my Tesla by associating the triple-tap with shortcuts that my Stats for Tesla app provides (more on this at the end of this article).
Here is how to enable this feature:
1. Launch the Settings app and select Accessibility and then tap on “Touch”
What if you want to add a button to your home screen that opens the trunk/frunk? Such a single-action button is convenient when you just want to do one thing and don’t want to launch an app, wait for it to connect to the car, and then tap the button for the action you want.
The alternative is to use Siri or the Widget. These are perfectly fine alternatives, but sometimes you can’t talk (e.g., in a meeting), or the action that you want does not have a button in the Widget.
There are numerous use-cases where we need to perform classification where some of the classes have very few examples in our training set.
The following CT scan image is an example of a case where the size of the abnormality in the prostate is much smaller than the size of the normal tissue (hence, a rare-class situation).
A typical (and often necessary) technique for handling rare classes is to apply a weight (e.g., inversely proportional to the percentage of each class in the training set) to the loss associated with each class. But maybe we can do better. I recently came across a paper …
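As a concrete illustration of that baseline weighting scheme, here is a small NumPy sketch. The inverse-frequency weighting and the normalization (so that weights average to 1) are common choices of mine for illustration, not a prescription from any particular paper.

```python
import numpy as np

def class_weights(labels, n_classes):
    """Weights inversely proportional to each class's frequency in the training set."""
    counts = np.bincount(labels, minlength=n_classes).astype(float)
    freq = counts / counts.sum()
    w = 1.0 / np.maximum(freq, 1e-12)     # rare classes get large weights
    return w / w.sum() * n_classes        # normalize so weights average to 1

def weighted_cross_entropy(probs, labels, weights):
    """probs: (n, n_classes) predicted probabilities; labels: (n,) integer classes."""
    p = np.clip(probs[np.arange(len(labels)), labels], 1e-12, 1.0)
    return float(np.mean(weights[labels] * -np.log(p)))
```

With labels `[0, 0, 0, 1]`, for example, the rare class 1 gets three times the weight of class 0, so mistakes on it cost the network proportionally more.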
In the past few years, many high-end car makers have started adding a head-up display (HUD) to their cars. BMW and Mercedes are among such car makers. The HUD projects information related to the car on the windshield so that the driver can glance at it without taking their eyes off the road.
The following image shows the information that some BMW cars project on the windshield of the car.
As a Tesla owner, you have probably done the simple math of dividing the current Rated range of your car by the battery percentage to compute the maximum range, and noticed that it is different from the nominal rated range when you first purchased the car.
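That simple math can be written out explicitly; the function name and miles as the unit are just illustrative choices here.

```python
def max_range(rated_range_mi, battery_pct):
    """Extrapolate the full-charge range from the current reading.

    rated_range_mi: Rated range currently shown by the car, in miles.
    battery_pct: current state of charge, in percent (0 < pct <= 100).
    """
    return rated_range_mi / (battery_pct / 100.0)
```

For example, a car showing 217 miles of Rated range at 70% charge extrapolates to a maximum range of 310 miles.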
This article attempts to address some of your questions related to the maximum range provided by the car.
Rated Range is the range that the car is expected to provide under nominal conditions defined by a government agency (e.g., the EPA). This range is a function of battery capacity. …
Viewing Sentry videos recorded by your Tesla on an iPhone/iPad is easy, but it can be a bit confusing at first.
There are a few ways to do this and this article attempts to list the most convenient ways of doing so.
In this solution, you need the following:
To make this solution work, just format the iXpand drive as FAT32 (this is a Tesla requirement) and add a TeslaCam folder (the folder name is case-sensitive) to the drive.
When formatting the iXpand drive, please follow the steps shown in the screenshot below. …
I’m designing a network for MRI segmentation (more on this in a future post) with a relatively large training set. Each epoch takes about 45 minutes to run (on my brand-new Nvidia 1080 Ti card), and convergence takes about ten epochs. I wanted to be notified as each epoch finishes and get the precision metric for that epoch, so that if the precision does not converge as I expect, I can change some hyper-parameters or the model structure. I did not want to sit in front of the computer while training progresses.
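A minimal sketch of such a per-epoch notification hook: build a short report and push it somewhere you will see it. The webhook URL and the JSON payload shape are placeholders for whatever notification service you actually use (e.g., a Slack incoming webhook), not part of any specific training framework.

```python
import json
import urllib.request

def format_epoch_report(epoch, precision):
    """Build the message sent at the end of each epoch."""
    return f"Epoch {epoch} finished: precision = {precision:.4f}"

def notify(message, webhook_url=None):
    """Push the message to a webhook, or print it as a fallback.

    webhook_url: placeholder for your own notification endpoint.
    """
    if webhook_url is None:
        print(message)  # fallback when no service is configured
        return
    data = json.dumps({"text": message}).encode()
    req = urllib.request.Request(
        webhook_url, data=data, headers={"Content-Type": "application/json"}
    )
    urllib.request.urlopen(req)

# In the training loop, call it once per epoch:
# notify(format_epoch_report(epoch, precision), webhook_url=MY_WEBHOOK)
```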
This post is…