Some Thoughts on Smart Canes

Over the last few years, many attempts have been made at creating a smart cane. None of them have successfully led to a market-transforming technology that's actually used by a substantial number of people, and most of them are simply inspiration porn. I saw a recent example of a smart cane getting news coverage, and it deserves particular attention because of a particularly egregious argument used within. I've seen this argument, or variations on it, made in several posts about smart canes. I will address it below and lay out why it does not present a solid case in favor of a smart cane. I will then lay out several design and engineering constraints that must be met before I would ever be able to recommend a smart cane to another blind person.

Argument structure and example

“We’ve come to a world where we talk about autonomous vehicles and yet we’re still sending visually impaired people out with what is essentially a stick,” Feghali said. “It doesn’t take you anywhere. It doesn’t take you to a coffee shop. It doesn’t help you seek employment. It’s just a stick.”

A similar argument that is often presented is functionally equivalent, but (wrongly) adds that the cane has not changed in almost a hundred years and is still just a stick.


There are literally thousands of examples of objects that are "just a stick", or in similar fashion just a piece of metal, or similarly primitive. For example, I'll often grab a wooden spoon if I'm stirring something on the stove, or use a pot with a simple design that has no fancy bells and whistles, does not need a Bluetooth connection, and does not have its own heat regulation. The knives in my kitchen resemble primitive metal objects (although they are far from primitive and have subtle design elements that engineers spent thousands of hours refining). I eat food with a fork, which is just a chunk of metal. The knives in my kitchen don't cut onions for me, or help me make sure my bread slices are all equally thin. I've never once thought "wow, a smart knife could help me slice potatoes more efficiently", because if I want to slice potatoes more efficiently, I'll grab a tool designed by smart engineers, like a mandoline or a food processor. Trying to make a knife smart would defeat the purpose of a simple and versatile tool. A pencil is "just a stick" with some graphite in the middle, and hasn't been seriously refined in decades. Where's my smart pencil that measures my pencil strokes and vibrates if my handwriting is sloppy? Right, the market doesn't want such junk. Mason jars, with their compression rings and seals, have had the same design for decades, and aren't being called primitive by the media. Hikers use wooden, fiberglass, carbon fiber, or metal sticks to balance, and those poles do not have GPS built into them. Any company trying to build a hiking pole with a GPS would get laughed at, because that would needlessly increase the weight of the pole, which is exactly why materials like carbon fiber are becoming so beloved in hiking poles. Hiking poles don't help hikers get jobs, nor do hiking poles offer any less advantage in a world filled with autonomous cars.
A hiking pole in the wilderness won't help you become unlost if you don't have a map and compass, and whether autonomous cars exist is irrelevant to whether hiking poles, with their simple, multiple-millennia-spanning design, are useful in their current form. Just like the tools listed above, the blind person's cane has lasted for a century in its current form, and for thousands of years prior with slightly less refined designs, because it works. It's that simple. The original engineers who decided to improve upon a stick to come up with a cane optimized the design so well that further refinements were relegated to small updates as new materials became available. A radically different cane or guiding implement has never materialized, because there aren't any problems that need solving in regards to finding nearby objects, besides objects above knee height. This is the only reasonable place where a piece of smart tech could be used on a cane.

Ramblings about cane designs

Blind people's canes aren't just a stick. Any reputable cane manufactured in the last two decades has a handle built into it and a tip that can be replaced, so the cane performs optimally and can be adapted for different environments. Cane tips come in many different materials. Ceramic tips are super hard, take several years or more to wear down, and are damn near indestructible, except in Canada where they freeze-thaw break. Nylon or UHMW roller tips are great when sliding a cane, and some are even designed for cobbles. Pencil tips are nice when trying to tap, because they are light, and marshmallow tips prevent obsessive cussing because they glide over cracks in walkways instead of sending the cane into the operator's gut. These tips are all replaceable, for one simple reason: the end of a stick doesn't last forever. Additionally, many white canes have reflective tape on them, which reflects light from vehicles so that a blind person walking at night is more visible. Many people prefer folding or telescoping canes. Folding canes often have a lot of design elements, such as joints that are capable of being tapped on for hours on end, rolled around causing all sorts of jostling, and finally folded up into something no longer than a foot in less than five seconds. The joints can't get stuck, because people will not be happy when a blind person can't get their cane folded up and it's in the aisle of a bus or train. Modern materials, such as carbon fiber, are great for canes, because they have lowered the weight of the cane, making it much easier to move quickly. Additionally, carbon fiber canes do a fantastic job of conveying the feel of the ground, such as what material the surface is made of, subtle differences in asphalt texture, changes in elevation, bumps, or lines at a street corner. Heavier materials like aluminum or steel aren't as suited for this purpose.
Carbon fiber bends elastically, far more than many other materials, so carbon fiber canes don't end up bent as often. Canes that are used in cities get run over by notoriously rude bikers and pedestrians, and shouldn't break. If a cane breaks and the person using it isn't carrying a backup, well ... they obviously haven't taken the age-old advice of carrying a backup cane seriously. It will only be a matter of time until they learn their lesson the hard way. That's why some folding cane designs, such as the Ambutech folding lightweight graphite canes, have engineered failures. Forces applied at a perpendicular angle to the cane trigger an engineered failure and fold a segment of the cane, which prevents a break in almost all cases. Meanwhile, forces that occur under normal use, such as forces applied on the same or a similar plane to the cane itself, will do absolutely nothing to it. I've literally had a pedestrian jump over my cane instead of saying excuse me, fail, and end up splitting it with their legs. Instead of breaking, or bending significantly, the cane will almost always just release a segment as if I had tried to fold it. I then snap it back together like nothing happened. This is also a great way to amuse myself, because the rude pedestrian thinks they've actually broken my cane and is really concerned, and I just snap it back together and walk away as if nothing happened. A cane is more than just a stick; it's a stick with many subtle refinements, made without changing the basic technology. The engineering refinements made over the decades are so well done that the average user, who isn't constantly analyzing engineered objects and wondering what subtle things were designed, won't even realize serious effort was placed into ensuring some set of constraints was optimized.

Things a smart cane would need.

In order for a smart cane to be usable, it needs to be light enough that I can move it back and forth, and it needs to move quickly enough that every time I take a step it can sweep across my entire body. Heavier canes become more annoying to do this with, and I end up walking slower, or becoming annoyed when I have to push the cane through snow or steer it around someone. If the lower end of a smart cane is broken, I'm not paying infinity for a new cane, and realistically the government isn't, nor should it, either. I'm expecting the smart tech to be able to be swapped onto a new cheap cane, and I'm expecting the non-smart bit to be collapsible so I can store a spare one in my bag in case I encounter a bear (um ... I mean rude cyclist). In reality, my cane is going to be used in many scenarios, some of which are far from ideal for electronic components. I have used my cane in full-on blizzards, rain-snow mix, and full-on summer downpours where literally everything is soaking. I've walked several miles in weather cold enough to become frostbitten in minutes, and I can't afford to take my gloves off to operate a touch screen. When I was in college, "don't go out in that kind of weather" simply was not an option, so the cane's water resistance had better be good enough to take a full-on wet onslaught for several hours. I'd better be able to take it from -10F into a 70F building without it caring. I also don't want my cane pulling an Apple and shutting down whenever it gets cold. I guess it'd be even more special if it acted like an Apple device, since Apple devices like to claim they're too hot when they shut down at the cold end of their range. I also don't want it talking to me when I'm trying to pay attention to where vehicles are while crossing a street, so having a shh mode might be useful. Or better yet, the shut-up mode should be the default and I should have to tell it to talk. This is very critical.
I've seen way too many devices designed "for the blind" that talk at infinity decibels in a really slow voice. I wanna see the designers cross a street with six lanes of traffic, one lane of which can turn left, and a jackhammer pounding in the background. Then, they'll do this while having someone scream really slowly and loudly in a demonic voice about a water bottle lying in the street, and we'll let a car run the red light just for fun and see if they handle it well. I don't really care for my cane to have a GPS, because my phone can already do that. However, above-the-knee objects would be useful to have identified. Also, can I swap in new batteries when they die if I'm on a long trek?

These are a few thoughts I have on smart canes, the flawed arguments that generally lead to a device that doesn't really need to exist, and the few cases where the tech could be useful if someone squints hard enough. Overall, I'm not particularly interested in seeing a smart cane come into existence, because it just sounds expensive and like a less than ideal weight addition. It seems like people are trying to modify a kitchen knife to heat food instead of creating an oven, and then generating media buzz about their new super cool knife that can also toast your bread.

In regard to NFB Resolution 2019-02: Regarding the Continued Exploitation of Workers with Disabilities under Section 14(c) of the Fair Labor Standards Act

In regard to NFB Resolution 2019-02: Regarding the Continued Exploitation of Workers with Disabilities under Section 14(c) of the Fair Labor Standards Act, I wish to approach this in a different way than by condemning and deploring all such organizations. When people are condemned and deplored, they do not want to work to create useful solutions to problems, and the people the NFB aims to serve end up losing the most as a result of defensive attitudes on both sides. No act of wrongdoing is solved by more wrongdoing, and even though it is difficult to be firm but kind when people are being harmed, we must stand strong and be the positive change we wish to see, by showing these organizations that they are not improving lives, but are actually leaving people without a future. As an organization with a strong motivation to work for more just disability policies, and to improve the lives of the blind of the U.S.A., the NFB can set an example by working with these organizations to turn this unjust law into a much better program. I would encourage the NFB to take a multi-pronged approach to this resolution. Firstly, any wrongdoing needs to be pointed out, while ensuring the offending organization does not want to fight. Meaningful policy is achieved when the opposing side wants to create a better solution, and forcing change through shaming often causes quite the opposite to occur. Secondly, the law must be changed. Deploring and shaming is not going to change this law, and forging strong allies may improve our chances of repealing this provision.

A while with Google Chrome


I have not updated this blog recently, but most points mentioned here have been corrected by now. Take this article with a grain of salt. I'm not deleting it for historical purposes.

A few months ago, I decided to give Chrome a spin for a while. I started out by simply using it to do simple searches on Google and read articles. The next step I took was testing it on apps such as Facebook and Twitter. Then, I progressed to Gmail and complex web apps.

In the following sections, I will outline several pros and cons of Chrome, with some audio examples using NVDA. I am exclusively an NVDA user, and thus I never use JAWS, so please don't ask me how Chrome works with JAWS, because I haven't tested this. I know how to use JAWS, but simply do not have a need for an expensive commercial screen reader in my day-to-day life.

As of June of 2017, Chrome accessibility can be described as mostly sufficient for day-to-day use, with a few caveats, which are being fixed. In some ways, it is better than Firefox, although it is certainly rough in a few areas. However, a massive quality control checkup is needed in some areas, as outlined below. Also, keep in mind that Firefox has more than a decade of actual users beating on it and battle-testing it, but now has a lot of legacy code that is going to cause some pain in the near future with the switch to multi-process support. Firefox being an old codebase means that it is less buggy, and fantastic, but it also lags in some areas. On the other hand, Chrome is a newer codebase, and has been multi-process from the start. More on this later.

The benefits

Speed in large complex web apps:

Simply put, Chrome outperforms every web browser in large web apps. If I take Firefox for a spin with Google Drive, it is very easy to notice lag. If I press down arrow in the files list, Chrome's response time is a lot faster. This is an audio demo of Firefox, Chrome, and native Google Drive. On Facebook, Chrome is better for speed. I don't have an audio demo of this, because I don't want private info from Facebook being exposed to people. However, the effect is not as pronounced. Anecdotally, Twitter also seems faster when navigating from tweet to tweet, although less so than in Drive. However, Chrome has other issues on Twitter, which make it impractical, and I will discuss that later. The moral of the story is that Chrome outperforms other browsers in most spaces, at least where speed is concerned.

Security indicators.

One of my favorite things about Chrome is that if you press shift+tab from the address bar, you are placed on a menu with security info. If you press the menu button, a panel expands with a dialog explaining the security settings, anything this site has asked your permission to use, and whether you granted that permission. This is very accessible. I was honestly surprised by this. It is very nice. I also get the different levels of security indicators, such as this site being verified as a company, or just secure, or even no security. (There should be a label of insecure or mixed security on the mixed security menu, IMHO.) One of my annoyances with Firefox is that as a blind user, I am forced to go view the certificate from the page info, and this is painful, and leaves many less experienced users vulnerable to phishing. The padlock is simply not accessible, and the "green" isn't shown in any way. I hope that when Google warns users that they are putting data into an insecure form, this is made accessible and alerts the user in a similar way to what we get currently. It doesn't seem this is blatantly obvious in Canary, but it should be in the label of the form or such, so users don't miss it. My only annoyance with this is that the official Google page doesn't seem to explain this to a screen reader user as well. (In that menu, click learn more to see what I mean.) They should say what the alt text of this menu is, rather than just the color of the padlock.

Accessible extensions bar

The Chrome extensions bar is accessible. In Firefox, most less-than-advanced NVDA users, and the clear majority of JAWS users, simply can't get to the addon bar. It requires object navigation with NVDA, and has no built-in keyboard-accessible way to get there and arrow along it. This has always annoyed me, because there should certainly be a way to get to the descriptions with the keyboard. In Chrome, this is a breeze. You simply press F6 from a web page, press tab or shift+tab (either will eventually work), and you are on the bar of extensions. From here, you can press the arrow keys, do the twist, and eat cake. (Okay, those last two are obviously a joke and a fair bit random, but all people have been commanded to have fun in life; otherwise, they will turn into a prune.) Once on the extensions bar, arrowing along it lets you move to each extension, and you can activate each extension by pressing its button (assuming the extension is accessible). My only complaint here is that the Chrome bar doesn't allow you to use single letter navigation. Firefox doesn't offer any way for keyboard-only users to focus the extensions list, or the Firefox menu for that matter (IMO this is a serious shortcoming).

Very simple UI.

The Chrome UI is very simple. For beginners, it gets out of the way. This is one of the best things about Chrome. I am used to a browser with tons of menus, so it was very weird at first, but it made me realize how little I use the random features that are scattered about. Chrome has most of them still, just not as surfaced.

Shortcomings or annoyances.

Well, those are my noteworthy features. Here are some annoying things. Note that I list general bugs, but I don't go into super detail on every little bug, because Canary is improving so rapidly that if I listed every little bug that annoys me, the list would be out of date in a month. That is indeed a good thing.

Chrome live regions are broken.

10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0 said Twitter, but alas, I didn't know!!! What? Twitter is accessible, right? Yes, but Chrome live region support is still broken in some ways. Some live regions don't read, and some of them read twice. Aria-atomic is ignored sometimes, and other times, Chrome reads and spells live regions multiple times. In short, this part of Chrome is being worked on and badly needs love. In a test I did, adding text to pages with innerText and innerHTML yields different results. Aria-atomic is ignored completely (it might be fixed in Canary based on last-minute tests I performed, but there are other broken cases), and acts differently based on the insertion method. No screen reader supports aria-relevant, so I didn't test this. Also, role="alert" is very different than plain aria-live, and Chrome isn't marking elements with role="alert" with "alert" roles in the IA2 tree. I need to file a ton of bugs with Chrome for these cases, but I wrote up an aria-live workout page to test things with. There are several controls on the page. The page is divided into two major regions: the first is to test polite live regions, and the second is to test assertive live regions. Within each section, there are many controls. There are buttons to update the live region for each subsection with the innerHTML and innerText methods, and there are buttons to do each of these things 10 times (to see how well the browser does at updating the live region multiple times). The atomic ones should speak hello and then the unique number. Finally, the toggle button which says "toggle ouch" toggles an alert being thrown every 2 seconds which says ouch. This is designed to test whether the browser is properly working with the alert role. Every time a button is pressed on the page from the regions other than the full-page controls, the status bar updates. The status bar should be treated as a live region as well.
This page is not user-friendly; it's simply something I wrote so that I could make sure I understood what was going on before I filed bugs to fix live region support. In short, this test page gives me the ability to test Chrome's live region support against other browsers and the ARIA spec, and make sure it is working as intended. When using Google Docs with NVDA in Chrome Canary, the experience was quite decent, even though these applications use lots of live regions. This was after a few bugs were hunted, so the experience will probably be less than stellar on the stable version.

Editor bugs

Every browser vendor has always struggled with editors. Editors are extremely hard to get right. After all these years of accessible Firefox and constant effort to make it better, we still don't have a bug-free editor. This is no different in Chrome. When I say editor, I mean the edit control and associated code for making such things as typing into text fields and complex rich edit support work. This problem is hard to solve, because there are simply a ton of edge cases. There is no shortage of editor bugs in Chrome, and most of them are very hard to reproduce, which makes reporting them a challenging task. Fixing these bugs is an ongoing job. I'm sure this will get better over time, but do not be surprised if blank lines are reported as containing the text of the line before them, or such.

Quality control help needed

Google needs a stronger accessibility policy in the quality control department. Dialogs all over Chrome (and other Google products) have unlabeled buttons. This has always been the case, and it seems to happen more with Google products than with other major companies. It is also common for new apps to be released with tons of easy-to-fix and easy-to-catch accessibility bugs, which could be avoided by giving all employees accessibility training and baking awareness of accessibility into developer culture, no matter what role a developer plays. For a company this size, I am surprised and very disappointed that something as important as the extensions dialog and download manager have simple accessibility bugs such as unlabeled buttons. A lot of these simple problems, such as links with base64-encoded gobbledygook text, can be detected with automated testing tools, and could be caught before they ever go live. Lack of page structure should be caught in code review, and a user should at least be testing core dialogs such as history for usability. A strong accessibility policy can rocket a company towards fixing these problems. I urge that a strong accessibility policy be implemented in the next few years, to help address some of these problems before they ever arise.

Download manager

The download manager has an unlabeled button after each download. When enter is pressed to download a new file, there is no feedback given to the user that a file download is in progress, and until the user presses F6 a few times to land on the new area of the screen which popped up, the user will be clueless to the fact that it exists. Once a user is in the new download area, they can tab to the controls in it, but the button that opens more options for the downloaded file is not labeled. Also, it is not clear to the average user that pressing enter on the download will launch the file (potentially causing trouble if it launches an executable program or such).

Extensions store

The extension manager's main page is accessible. However, installation of extensions is where the roses die. Trying to install an extension is like drinking an entire soda at once. It hurts. The tabs (I assume) on the top of the page with extensions, apps, and such are simply links, where a list or tab control would make it feel more like an app. There are radio buttons, which report as half checked, labeled "and up". Worse is a very weirdly labeled set of links on the homepage. Snippet below.


This is an excerpt of a chunk of the links which I see on this page. Along with this are a myriad of unlabeled links, a complete lack of structure for the entire page, and no headings. From a usability point of view, this page has no value whatsoever for a screen reader user in its current form, other than the search field. If rated on WCAG 2, it isn't even level A compliant, not even close. I couldn't list everything needing to be fixed here if I wanted to, because I don't know what is being presented.

Other notable page quality control issues.

  • The history page is a structure desert too. The same goes for the bookmarks page. These two pages need a complete accessibility rework.
  • The settings page has seen some love in the last few months. If you still see problems, report them.
  • Some dialogs can't be read in browse mode, and that issue just got triaged, so I know it's being investigated.
  • There aren't labels for access keys in the menus.
  • Lately, right-click menus don't gain focus when the apps key is pressed. This issue was tracked and triaged. However, it was a regression that hasn't bubbled out of Canary, so that's good news.

Developer tools

The Chrome dev tools are not accessible enough. As a developer, I understand that some people depend on dev tools for their job. If dev tools aren't accessible, a blind person can't develop for Chrome very easily. It may be the case that if a developer can't read the JavaScript errors on their page, they lose their job. I really think dev tools needs careful thought about being fixed in Chrome, and that it shouldn't be sidelined as a low-priority issue, because a recent Stack Overflow survey found that almost 2% of developers are blind, and only 4% of respondents to the survey chose to disclose their disability. If I as a developer (in that two percent) cannot debug my code, I lose my job. If I lose my job because the dev tools aren't accessible, it is quite frustrating, degrading, and not pleasant, aside from the fact that blind developers face lots of other challenges getting a job in the first place.


Chrome is becoming a great, competitive browser for screen reader users, and it is doing so in leaps and bounds. We have progress to make before it is a polished experience, but finally, there is competition in the accessible browsers market. I am very excited to see accessibility in Chrome now. (Back in the old days, when dinosaurs roamed the earth and I was a boy, it had no accessibility at all.) Jokes aside, I will continue to monitor Chrome closely, and keep using it, along with Firefox, on a daily basis.

The Objects: AutoPropertyObject

In this first article, I will begin a deep dive into NVDA's design. I will explain the base object common to many components, and how it is used. I assume you have familiarity with the NVDA development guide and the design overview. At the bottom of the inheritance hierarchy of the objects you'll find NVDA using all the time is AutoPropertyObject, whose metaclass is AutoPropertyType. I aim to give you a basic understanding of how to utilize the full power of AutoPropertyType, but understanding its source code is an exercise left for readers of the NVDA source code, or advanced developers with an intimate understanding of how Python works internally.


Okay, what's this AutoPropertyObject?

class AutoPropertyObject(object):
    """A class that dynamically supports properties, by looking up _get_* and _set_* methods at runtime.
    _get_x will make property x with a getter (you can get its value).
    _set_x will make a property x with a setter (you can set its value).
    If there is a _get_x but no _set_x then setting x will override the property completely.
    Properties can also be cached for the duration of one core pump cycle.
    This is useful if the same property is likely to be fetched multiple times in one cycle. For example, several NVDAObject properties are fetched by both braille and speech.
    Setting _cache_x to C{True} specifies that x should be cached. Setting it to C{False} specifies that it should not be cached.
    If _cache_x is not set, L{cachePropertiesByDefault} is used.
    """
    __metaclass__ = AutoPropertyType

What in the world is this AutoPropertyType metaclass? To explain, let's bring out the NVDA Python Console! Fire up a browser, ensure NVDA is in browse mode, and type this in. Ignore the >>> stuff; that's the prompt, as you probably know by now, and the console provides the focus variable for you.

>>> type(focus.treeInterceptor)
<class 'virtualBuffers.gecko_ia2.Gecko_ia2'>
>>> type(type(focus.treeInterceptor))
<class 'baseObject.AutoPropertyType'>
>>> type(type(type(focus.treeInterceptor)))
<type 'type'>

Um ... so the tree interceptor (more on those later) is of type virtualBuffers.gecko_ia2.Gecko_ia2. However, things are strange: the type of that type is not type! But its type's type's type is type. Most Python types are of type type. However, a metaclass, which is indeed an advanced topic beyond the scope of this article, changes the type of a class. It's all rather meta. What's happening here is that virtualBuffers.gecko_ia2.Gecko_ia2 inherits from AutoPropertyObject, with some number of other classes sandwiched in the middle of the hierarchy; thus, it takes on the type AutoPropertyType. This 'type' exists to create some special behavior in the core objects NVDA uses. So let's discover what AutoPropertyObject does.
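To make the mechanics a little less mysterious, here is a minimal, hypothetical sketch, not NVDA's actual code, and written in modern Python 3 metaclass syntax, showing how declaring a metaclass changes what type(type(obj)) reports. The names AutoPropertyLikeType and Widget are my own inventions for illustration:

```python
# Hypothetical stand-in for baseObject.AutoPropertyType; the real one
# does far more than this empty subclass of type.
class AutoPropertyLikeType(type):
    pass

# Any class declared with this metaclass (directly or via inheritance)
# takes on AutoPropertyLikeType as its type.
class Widget(metaclass=AutoPropertyLikeType):
    pass

w = Widget()
print(type(w))              # the class itself: Widget
print(type(type(w)))        # the metaclass: AutoPropertyLikeType
print(type(type(type(w))))  # plain old type
```

This mirrors the console session above: the class's type is the metaclass, and only one level further up do we hit type itself.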

The magic starts here!

Assuming you aren't familiar with Python properties, here's a ten-second crash course. A property is just like a variable, but with magic hooked up. Let's assume we have a class Rectangle. We instantiate a rectangle with sides 5 and 7, which has an area and a perimeter. Let's get some properties from it.
Here's class Rectangle.

class Rectangle(object):

    def __init__(self, length=0, width=0):
        self.__length = length
        self.__width = width

    @property
    def length(self):
        return self.__length

    @length.setter
    def length(self, length):
        if length < 0:
            raise ValueError("In this universe, rectangles are tangible thingies.")
        self.__length = length

    @property
    def width(self):
        return self.__width

    #Same method name yes, but the decorator turns this into the property's setter.
    @width.setter
    def width(self, width):
        if width < 0:
            raise ValueError("Negative widths result in invalid rectangles. Learn math, and try again.")
        self.__width = width

    @property
    def area(self):
        return self.length * self.width

    @property
    def perimeter(self):
        return self.length * 2 + self.width * 2

    #You can also use the property built-in directly, but that's considered homework for the reader.

rect = Rectangle(5,7)
#Let's do stuff.
rect.area #35
rect.perimeter #calculates 5*2+7*2 and outputs 24
#Let's do some horrible things.
rect.area = 55 #ouch, this raised AttributeError? Why?
rect.length = 2 #oh, hey, it worked
rect.length #Hey, looky, it returned 2.
rect.perimeter #calculates 2*2+7*2 and outputs 18
rect.width #Hey, 7 came out.
rect.width = -3 #ValueError is raised. How neat.
rect.area #14 comes out. Wow, it computed that on the fly?

As you can see, properties allow us to control what happens when we ask for a value. We can ask Python for rect.area, and behind the scenes, a function defined by the property, called a getter, is asked for the current value. (Getter: get the value associated with the property.) If we do rect.length = 1000000, the property's setter is called with the value we want to set. This way, we can make a read-only variable, control what value gets set, and so on.
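As an aside, the property built-in mentioned in the homework comment above achieves the same thing without decorators. Here is a minimal sketch using a hypothetical Square class of my own invention, not anything from NVDA:

```python
class Square(object):
    def __init__(self, side=0):
        self.__side = side

    def _get_side(self):
        return self.__side

    def _set_side(self, side):
        if side < 0:
            raise ValueError("Squares need non-negative sides.")
        self.__side = side

    # property() wires a getter and an optional setter together by hand.
    side = property(_get_side, _set_side)

    # A getter-only property: reading works, assigning raises AttributeError.
    area = property(lambda self: self.side * self.side)

sq = Square(4)
sq.area      # 16
sq.side = 5
sq.area      # 25
```

Decorators and property() are two spellings of the exact same machinery; NVDA's base objects effectively generate these property() calls for you, as we'll see next.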

Basically, what AutoPropertyObject, by way of AutoPropertyType, does for us automagically is allow properties to be defined with some fancy method naming. If we read the docs for this class, we see that functions called _get_x and _set_x are mentioned. Why? Also, what's this caching about?

class TestFancyProperties(baseObject.AutoPropertyObject):
    cachePropertiesByDefault = True
    def _get_x(self):
        return getattr(self, "_x", 1)

    def _set_x(self, x):
        if x < 0:
            raise ValueError("Negative number? Um, I don't think so!")
        self._x = x

    def _get_xSquared(self):
        return self.x * self.x

    def _set_xSquared(self, x):
        raise ValueError("No! You can't change properties of math, not in this universe at least!")

See how cool that is? Now, when we instantiate that and ask for thing.x, we'll either get 1 (the default) or the previously set x. If we do thing.x = 5, x will magically report as 5 from then on. If we access thing.x, thing.x, thing.x, thing.xSquared, thing.xSquared in a row, you'd think that would be expensive, given that each access calls into the _get_x function, right? No. The value of x is actually cached for one core pump of NVDA. This is good, because getting thing.x multiple times saves us from calling the getter again. In this example that is a really cheap operation, but let's assume the getter is fetching the accessible parent of an object. That may easily be a COM call away. We don't want to write code like this, do we?

if self.parent and self.parent.windowClassName == u"window":

If we do that without a cache, that might be 4 or 5 COM calls. We can reduce that to 2 COM calls with caching getters, and that's done magically for us.
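To make the caching idea concrete, here's a rough standalone sketch of a per-cycle caching getter. This is purely illustrative; names like CachedProps and invalidateCache are mine, not NVDA's:

```python
# Illustrative per-cycle property cache; NOT NVDA's actual implementation.
class CachedProps(object):
    def __init__(self):
        self._cache = {}
        self.calls = 0  # counts how often the expensive getter really ran

    def _get_parent(self):
        self.calls += 1  # stands in for an expensive COM call
        return "window"

    @property
    def parent(self):
        # Serve from the cache when we can, only call the getter once.
        if "parent" not in self._cache:
            self._cache["parent"] = self._get_parent()
        return self._cache["parent"]

    def invalidateCache(self):
        # Imagine this running once per core pump.
        self._cache.clear()

obj = CachedProps()
obj.parent; obj.parent; obj.parent
print(obj.calls)  # 1: three reads, but the expensive getter ran only once
```

Within one cycle, each cached property costs at most one real call, which is where the "4 or 5 calls down to 2" saving comes from.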

Take away message!

Basically, if you see a method _get_thing or _set_thing, remember that you can just say blah.thing and set blah.thing = "to a value". Also, if you can't find where a given attribute is defined on an NVDA object, that may be because it is really an AutoProperty. I have one last word of wisdom about AutoProperties (or dynamic properties).

This is really, really, really important. I can't stress this point enough; if I had known it when I started developing NVDA add-ons, I would have saved myself several hours of debugging silly errors. Any constructor you override, if the class derives from AutoPropertyObject, must call super. Yes, you really should always call super when you override something, but let's face it: we're all stupidly lazy. The goals of a programmer are 1. write the least code possible, and 2. write the fewest comments possible explaining the code, so that we still have job security in 20 years (Hahahahahahaahahahaha boss, take that). Back to the important stuff. If you ever do make the mistake of forgetting to call super in a class that derives from AutoPropertyObject, no matter how far down the chain it is, you will get an error to the effect of "I can't find _autoPropertyCache." This is because the class tries to cache things, but the cache is never created in the constructor.
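Here's a minimal standalone sketch of that failure mode. This is my own simplified stand-in, not NVDA's actual baseObject code, and the _propertyCache name here is just illustrative:

```python
# Simplified stand-in for a cache-based base class like AutoPropertyObject.
class CacheBase(object):
    def __init__(self):
        self._propertyCache = {}  # only exists if this __init__ actually runs

    def cached(self, key, compute):
        # Raises AttributeError if a subclass forgot to call super.
        if key not in self._propertyCache:
            self._propertyCache[key] = compute()
        return self._propertyCache[key]

class Good(CacheBase):
    def __init__(self):
        super(Good, self).__init__()  # the cache gets created

class Bad(CacheBase):
    def __init__(self):
        pass  # forgot super(): the cache never exists

Good().cached("x", lambda: 1)  # works fine
try:
    Bad().cached("x", lambda: 1)
except AttributeError as e:
    print("Boom:", e)  # the "can't find the cache" class of error
```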

I hope this was useful. Please feel free to contact me if you have comments or questions. I'm lazy, and can't be bothered to deal with comment spam bots on this site. Sorry.

How I rolled two factor authentication for ssh access to the server that powers this site.

Update: some details in this post are out of date. I'm no longer running the hardware I was when I wrote this, and I have since converted to systemd.

SSH and two (or more) factors of authentication.

Best practices in IT security dictate that to harden a computer against attacks, one should require users to enable two-factor authentication. In a two-factor authentication scheme, a user submitting credentials to a service must submit two different forms of proof that they are who they claim to be. (A service is most often thought of as a website, but in this case the credentials go to the SSH daemon.) In my first blog post, we are going to explore a setup for deploying two (or more) factors of authentication for the SSH daemon, such that a user must first provide an SSH key or password, and then a two-factor code from Google Authenticator. My current setup runs on the cheapest Digital Ocean droplet available (512 MB RAM, 1 CPU core, and 20 GB SSD storage). I am running Ubuntu 14.04 LTS, with an nginx server set up behind what I am protecting with two-factor auth; it serves the website you are reading.


In IT security, two-factor authentication is commonly known as a scheme where you harden a system by requiring two forms of authentication: usually something you know (a password, or an SSH key password) and something you have (an SSH key in encrypted form, Google Authenticator codes, a biometric, a Yubikey, or a smart card). This is what I call the "something you know and something you have" rule. If possible, you should (either by policy or by technical measures) enforce the something-you-have part. It is hard to do by policy because, let's face it, people are lazy and don't want to get their phone out to prove the something-you-have part.

Prerequisites

You will need a Linux distribution (any UNIX-like system should work, but I have only tested this on Ubuntu Linux). To keep consistent with what I have, disable root login over SSH, and ensure you have a firewall enabled that allows port 22 (or, if you changed the port SSH is served on, that port). I am also using public-key auth with an SSH key protected by a strong password, and my regular account has sudo privileges so that I can elevate to root for any commands. If you disable root login without enabling sudo privileges for your account, you will not be able to use the root account again, and you probably don't want to do that.


Keep an extra SSH session open before you start, and do not log out of it: if you break things, you may end up locking yourself out of the server, and there is no idiot switch to enable SSH again. (Have fun using the machine's real console.) I had two SSH shells running and didn't log out of the second until I knew things were working properly. Also, back up the files I ask you to edit, just in case.

Let's get started

First things first, (obviously) ssh into your server.

Required packages:

Now, get the package for Google Authenticator. I assume you are not running as root and thus use sudo.

$ sudo apt-get install libpam-google-authenticator

Setting it up:

Now, run the google-authenticator command in the context of every user you want to have access to the server. I didn't use Match blocks in the SSH daemon config, so two-factor is enabled for all users; there are ways to selectively require it for only certain users that I won't show here. If you need to run google-authenticator as bob while logged in as root, you can use the su command (that one is up to you to figure out). Before continuing, make all users of the system run google-authenticator and follow the prompts. Do not continue until all accounts are secured with two-factor.

$ google-authenticator

It asks if you want time-based codes. I recommend you answer yes.

It asks if you want the config written to your ~ directory. You must answer yes.

Now it will display a QR code (maybe), a secret key, and some rescue codes.

Copy all the rescue codes down somewhere safe, and copy the secret key somewhere safe as well. The rescue codes are your get-out-of-jail-free cards: if you lose them and lose the phone with Google Authenticator on it, you are locked out for good. (In other words, if you lose them, change your secret key by running this step again.)

Enter the secret key into Google Authenticator.

IMPORTANT: to keep the something-you-have part of two-factor auth meaningful, I really don't recommend running the two-factor authentication app on the same PC you are SSH'ing from. That removes the point of two-factor authentication, because the something you have is gone. Use a phone or a separate PC; phones are nice because they really are something you have.

Now enter what you wish for these prompts.

When all is done, you should have a .google_authenticator file in your user directory.

$ ls -a ~ | grep google

If it prints .google_authenticator you should be good to go.

Edit the config files

Use your favorite text editor for this.

$ sudo nano /etc/pam.d/sshd

Add this line to the file. (For future reference, any configuration line displayed on its own like the one below should be added or edited.)


auth required


One tutorial I saw said to add this to the top, but I added it to the bottom after it failed to work properly. This is just including the shared object (library) for google auth.

Now edit


$ sudo nano /etc/ssh/sshd_config

If the ChallengeResponseAuthentication line exists, set it to yes; otherwise, add the appropriate line. Note it may be commented out.


ChallengeResponseAuthentication yes


Now add this line to the same file.


AuthenticationMethods publickey,keyboard-interactive


This sets up a chain of requirements. If you don't want to use a public key (I always recommend public-key crypto over passwords), then skip this line, but be aware that without it, someone logging into the machine with a public key will bypass two-factor auth: sshd assumes that if the user has a valid key, they don't need to enter a password and two-factor token. Log out and try logging back in to see if it worked. If not, do not log out of the other SSH shell.

Typical chain of events:

  • User Malinda gives her username.
  • The SSH server issues a public-key challenge, and her client provides the appropriate key.
  • Since she is a good and security-conscious user of her computer, she is required to type in her SSH key password before the challenge is complete.
  • Now she is asked for her account password. She provides it.
  • The system asks her for a two-factor auth token. She complies.
  • If the two pieces of information above are correct, she is in. Otherwise, access is denied, and it asks for the password and two-factor token again with no indication of whether the password or the token was wrong.

Other Notes:

If you have a Yubikey, there is a Yubikey PAM module available on the Yubikey page. I haven't played with it because I don't have a Yubikey.

It is possible to disable password auth altogether, so login just requires username > SSH key > auth code, but I didn't do that.

Also, if you are a company with high-security needs or multiple (pesky) humans to enforce two-factor auth on, you can better enforce the two factors of authentication if you use a Yubikey. It is much harder to get around than an authenticator application, because a Yubikey is a physical device and thus is truly something you have.


Weather app

This is a web app implemented with Django that aims to present weather data in a simple format. I host it on my website, running in a Django instance. I currently license it under the GNU AGPL because I want anyone who wishes to see my code to be able to see it. The core of this app came from a Python command-line weather app I wrote to play around with the Dark Sky company's API; see their site for more info on the quite nice weather API they provide.

How it works:

It uses the native JavaScript geolocation API to get the user's current (quite precise) location, then loads the requested weather data asynchronously through AJAX. On the back end, I am using POST requests and a simple API located at /weather/forecast; logic decides which subpage to load based on the parameters in the request. I would do this differently if I rewrote it (it's a pile of junk how it works), but it was my first web app, so hey. The front end presents most things textually. I may use the platform I have built to explore audio representations of weather radar. I built a little hacked-together weather chart where I map tones to temperature and volume to chance of precipitation. I might explore using 3-dimensional audio and other factors to represent weather phenomena in an audio weather map, for once giving the blind the ability to see oncoming rain storms or threats from thunder, or just to look at the next hour's radar like a sighted friend might.
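As an illustration of that tone mapping, here's a tiny sketch. The ranges and the tone_for name are hypothetical choices of mine, not the app's actual code:

```python
# Map temperature to pitch and precipitation chance to volume.
# Assumed ranges: 0-100 F onto 220-880 Hz; chance 0.0-1.0 onto gain 0.0-1.0.
def tone_for(temp_f, precip_chance):
    clamped_temp = max(0.0, min(float(temp_f), 100.0))
    freq = 220.0 + (880.0 - 220.0) * clamped_temp / 100.0
    gain = max(0.0, min(float(precip_chance), 1.0))
    return freq, gain

print(tone_for(50, 0.25))  # (550.0, 0.25): a mild day with a slight rain chance
```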


Virtual Clock Tower

I am minorly enthusiastic about antique clocks. I don't really know that much about their mechanical workings, but I have an interest in them. This little program was a prototype I created to turn a desktop computer into a clock tower. It sucks because I can't find decent recordings of the proper bell sounds. Anyhow, it was an experiment I may put a UI on some day, to see if I can turn the computer into a device that will engage children with an integral part of our society's history, so they become interested in studying antique clocks, especially the massive towers that house clocks in some of the world's coolest cities.


Two math problems on money and sequences.

Math problem for the mentally curious

For those who enjoy a math problem to finish off a great day on this planet, here is one for you. I randomly came up with it while watching the Democratic debate last night (I must have been bored). It is two problems; the second follows from the first.

Problem statement!

Suppose you owe me money from a recent loan. Your unfortunate situation lets me demand money from you in either one of two ways, and you get to choose which.

Option 1:

You pay me 1 dollar a day for $$n$$ days.

Option 2.

You pay me 0 cents today (day 1), 1 cent tomorrow (day 2), $$0+1+2$$ cents on day 3, $$0 + 1 + \dots + i + \dots +(n-1)$$ cents on day n.

The dilemma:

[1]: On what day does it become unprofitable for you to pay me with option two?

[2]: Additionally, create an expression to tell me how much money I will profit if you pick option 2. Hint: negative profits are possible on early days, up until the day when statement 1 above becomes true.
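If you'd like to check your pencil-and-paper answer to [1], here's a tiny brute-force sketch (my own framing, not part of the problem: running totals kept in cents, option 1 at 100 cents per day). It prints the crossover day, so it's a spoiler:

```python
# Brute force for problem 1: find the first day where option 2's
# running total exceeds option 1's. Option 2 pays 0 + 1 + ... + (k-1)
# = k*(k-1)//2 cents on day k.
opt1 = opt2 = day = 0
while opt2 <= opt1:
    day += 1
    opt1 += 100                   # option 1: one dollar per day
    opt2 += day * (day - 1) // 2  # option 2's payment on this day, in cents
print(day)
```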

Problem statement for problem 2!

Suppose you owe me money from a recent loan. Your unfortunate situation lets me demand money from you in either one of two ways, and you get to choose which.

Option 1:

You pay me $$x$$ dollars a day for $$n$$ days.

Option 2.

You pay me 0 cents today (day 1), 1 cent tomorrow (day 2), $$0+1+2$$ cents on day 3, $$0+1+ \dots +i+ \dots +(m-1)$$ cents on day m.

The dilemma:

Find an expression in $$x$$ for the day $$m$$ on which it becomes unprofitable for you to pay me with option two (in other words, the day when option two's cumulative payments overtake option one's). Additionally, create an expression to tell me how much money I will profit if you pick option 2. Hint: negative profits are possible on early days, up until that crossover day.
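A brute-force check for the generalized version (again my own framing, with running totals in cents, and crossover_day is a hypothetical helper name):

```python
# Find the first day m where option 2's running total exceeds option 1's,
# when option 1 costs x dollars (100*x cents) per day.
def crossover_day(x):
    m = opt1 = opt2 = 0
    while opt2 <= opt1:
        m += 1
        opt1 += 100 * x
        opt2 += m * (m - 1) // 2  # option 2 pays 0 + 1 + ... + (m-1) cents
    return m

print(crossover_day(1))  # matches problem 1's crossover day
```

Since option 2's running total through day $$m$$ is $$\frac{m^3 - m}{6}$$ cents, the crossover condition simplifies to $$m^2 > 600x + 1$$.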

Crash Hero!

NVDA developers and bug smashers! ATTENTION! This is an important announcement from the department of release stability management. Recently, a new member joined the NVDA community. She will remain anonymous, but you may refer to her as the Crash Hero. In fact, she is the first NVDA superhero. She exhibits her superpower in the form of an NVDA add-on that automatically saves all your crash dumps in a folder of your choosing when NVDA reboots after a crash. She even asks you what you were doing before the crash and logs it in a messages file. With her completely accurate perception of time and date, you will always know when crashes happened, because she names each crash as a timestamped folder inside your crashes directory. Let the Crash Hero save you from having to remember where to find the crash dump and keep track of what exactly you were doing before a crash occurred.

Now that I've introduced the crash hero, on behalf of the crash hero, I would like to invite you to experience the thrill of never having to open your temp directory and frantically save the crash somewhere when a crash happens in the middle of a homework assignment or business meeting. The crash hero will save crashes in your user folder by default, but by selecting the crash settings item in your NVDA preferences menu, you can pick a custom folder to log all of your crashes in! The crash hero is here, and the crash hero is ready to help you if you are someone who regularly runs snapshots of NVDA so that you or other developers can catch bugs before they sting the general masses.


Source code (The crash hero is only made stronger by others contributing):

Changes to Google's YouTube for iOS, and how to use it with VoiceOver.

YouTube was recently updated, and several VoiceOver changes were put in. At first, you may do what I did: "oh, damn, it, google, stop, breaking, things!!!"

It turns out that Google actually fixed a lot of things in this version, making the user experience more streamlined and much more efficient. They did seem to break one thing, though.

The story about the video player

In this update, you will find the traditional video player, and below it the video title. There used to be a more actions button near there, and also a button to expand the description. All of that is gone. Instead, you'll notice VoiceOver says "actions available", just like the home screen, Mail, and a lot of other iOS apps. The available actions are as follows, as tested on a video by SciShow Space.
  • Activate item <default> (likes the current video; there is no way the user would know this without trying it out for themselves)
  • More actions: expands the options for share and add to list.
  • Subscribe
  • [If subscribed to the current channel]: send me/stop sending me every notification for this channel.
Now, flicking left/right will bypass everything that used to be there, letting you move through the app more efficiently by flicking. This is great. They've also changed like/unlike: pressing the default action seems to like the video. Or flick down to more actions, where you may see numbers like 8K and 2K; the first is like and the second is dislike (I think). They aren't currently labelled with like or dislike. There's also a handy share button. The only thing missing is a way to view the video description, though there may be other missing features I don't know about.

The videos view

The videos view used to suck: you flicked right and it said "more actions" and other things. Now, simply flicking right sends you to the next available video. Again, you can flick down or up here to do the following:
  • "Activate item" <plays the video>
  • More actions <brings up a flyout allowing you to do these things:>
    • Express that you aren't interested in seeing this video ever again.
    • Add to watch later.
    • Add to playlist.
    • Share ...
    • Cancel <can be activated by scrubbing as well>
This enhancement is a really nice move by Google, and I'm glad to see it. It was confusing at first, because I didn't realize things had changed, but now that I've figured it out, it's quite welcome.