An Apple Swift Student Challenge interview: Two-time champion Kai on ‘Seymour’ and coding for a cause

In this interview, he shares insights into his journey, the challenges he faced, and his aspirations for the future.

Image Source: Kai

Kai’s journey into the world of app development began with a deeply personal motivation. When he discovered he was at high risk of retinal detachment, a condition that can lead to vision loss and blindness, he channelled that concern into a mission to help others.

This determination resulted in his innovative app, Seymour, which aims to assist visually impaired individuals by restoring a sense of depth through a built-in “depth map” generated from the camera feed.

Kai’s passion for coding, sparked four years ago by his father’s encouragement to take a programming course, has led him to win the Swift Student Challenge not once but twice.

Image Source: Kai

Congratulations on your second Swift Student Challenge win, Kai! Could you tell us more about the inspiration behind your app Seymour?

The inspiration for the concept came from the TV show Avatar: The Last Airbender, which features a character named Toph who can "feel" the environment around her by sensing vibrations through her feet. I wanted to recreate that as a form of assistive technology, allowing real-life visually impaired individuals to better understand their environment.

The mechanism was inspired by a TED Talk by David Eagleman, who showed how vibrations in a vest allowed deaf people to perceive sound. Seymour applies this idea of routing information from one sense through a different sense, translating visual information into sound and touch.

How does the "depth map" feature in Seymour work, and how does it enhance the experience for visually impaired users?

Seymour uses an iOS device's camera feed to create a "depth map" of the environment. The map is generated by feeding each frame of the feed to a neural network model that performs monocular depth estimation, meaning depth is estimated from a single camera rather than the two "cameras" humans usually rely on: our eyes.
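In rough terms, the per-frame pipeline can be sketched as follows. This is a minimal illustration using Apple's Vision framework with a generic Core ML depth-estimation model (for example, Apple's sample FCRN-DepthPrediction); the model Seymour actually uses, and its post-processing, are not public:

```swift
import CoreML
import CoreVideo
import Vision

// Minimal sketch: run a monocular depth-estimation model on one camera frame.
// Any Core ML depth model compiled into the app can be passed in; Seymour's
// exact model is not public.
final class DepthEstimator {
    private let request: VNCoreMLRequest

    init(model: MLModel) throws {
        let visionModel = try VNCoreMLModel(for: model)
        request = VNCoreMLRequest(model: visionModel)
        request.imageCropAndScaleOption = .scaleFill
    }

    /// Returns the model's raw depth output for a single camera frame.
    func estimateDepth(for frame: CVPixelBuffer) throws -> MLMultiArray? {
        let handler = VNImageRequestHandler(cvPixelBuffer: frame, orientation: .right)
        try handler.perform([request])
        let observation = request.results?.first as? VNCoreMLFeatureValueObservation
        return observation?.featureValue.multiArrayValue
    }
}
```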

Although I have not had the chance to test Seymour with any visually impaired users, the depth map is meant to give a more comprehensive picture of the user’s environment. If their finger is placed on the left side of the depth map and they hear a higher pitch, they will know that the objects in that area are closer to them, potentially indicating things like walls. By swiping their fingers across the depth map and listening to the changes in tone, a visually impaired user would be able to identify walls, corridors and obstacles that may be too far away for a white cane to detect.
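As a rough illustration of that touch-to-pitch mapping: assuming the depth map arrives as a normalized 2-D array (1.0 meaning closest) and picking an arbitrary frequency range, the lookup might look like this sketch. The log-scale interpolation is an illustrative choice, not necessarily Seymour's:

```swift
import CoreGraphics
import Foundation

// Minimal sketch of mapping a touch on the depth map to a tone frequency.
// Assumes a normalized 2-D depth array where 1.0 means "closest"; the frequency
// range and log-scale interpolation are illustrative, not necessarily Seymour's.
struct DepthSonifier {
    let lowFrequency = 220.0   // far objects -> low pitch (A3)
    let highFrequency = 1760.0 // near objects -> high pitch (A6)

    /// Converts a touch location over the on-screen map into a frequency in Hz.
    func frequency(forTouchAt point: CGPoint, in depthMap: [[Double]], viewSize: CGSize) -> Double {
        let rows = depthMap.count
        let cols = depthMap.first?.count ?? 0
        guard rows > 0, cols > 0 else { return lowFrequency }
        // Find the depth-map cell under the finger.
        let row = min(rows - 1, max(0, Int(point.y / viewSize.height * CGFloat(rows))))
        let col = min(cols - 1, max(0, Int(point.x / viewSize.width * CGFloat(cols))))
        let closeness = depthMap[row][col] // 0 = far, 1 = near
        // Interpolate on a log scale so equal depth steps sound like equal pitch steps.
        return lowFrequency * pow(highFrequency / lowFrequency, closeness)
    }
}
```

An exponential mapping like this keeps pitch steps perceptually even; a linear map would compress the differences between nearby objects.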

Image Source: Kai

Your coding journey began four years ago with encouragement from your father. What was that initial experience like, and how has your approach to coding evolved since then?

My coding journey started during the COVID-19 pandemic, when my father signed up for a C programming course and let me complete it. Learning C as my first programming language was like being dropped in the deep end: I had to become familiar with low-level things like memory allocation, and I had to implement a lot of conveniences myself instead of having them provided by the language.

Despite the initial difficulty, this was the best way I could have possibly been introduced to programming. C gave me the foundational understanding that made learning other programming languages like Swift extremely easy. My approach to coding has stayed much the same: taking a complex problem and breaking it down into many smaller parts. In C, this was necessary – I had to implement almost everything myself. In Swift and other modern languages, breaking down complex problems helped me understand and tackle them effectively.

Winning the Swift Student Challenge twice is a remarkable achievement. What were some of the biggest challenges you faced while developing Seymour?

When I won, I was very excited, both because I'm now 2 for 2 in the Swift Student Challenge and because I saw the AirPods Max in the prize list. Since the SSC was announced three or four months earlier than usual, I had looked forward to having more time for my submission, so I decided to make something more technically ambitious using the camera and AI. I also had an iPad to test my app on this time, which let me do more advanced touch-based interactions than when I was working with just my Mac previously.

I had two significant challenges when I was developing Seymour.

The first major challenge was configuring the audio tone. Seymour plays a pure tone when the user puts their finger on the depth map, and the pitch rises as the objects under the finger get closer. Playing pure tones that vary continuously, rather than holding the same note for a few seconds, was surprisingly tricky, and I ended up taking apart an open-source piano app to understand how to implement it.
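Seymour's audio code isn't public, but one common way to produce a continuously varying pure tone on iOS is an AVAudioSourceNode oscillator whose frequency is updated on the fly. A minimal sketch, with threading and volume handling simplified:

```swift
import AVFoundation

// Minimal sketch of a continuously variable pure tone, based on the common
// AVAudioSourceNode oscillator pattern; Seymour's actual audio code is not public.
final class ToneGenerator {
    /// Updated from the touch handler as the finger moves across the depth map.
    var frequency: Double = 440.0

    private let engine = AVAudioEngine()
    private var phase: Double = 0

    func start() throws {
        let sampleRate = engine.outputNode.outputFormat(forBus: 0).sampleRate
        let source = AVAudioSourceNode { [weak self] _, _, frameCount, audioBufferList -> OSStatus in
            guard let self = self else { return noErr }
            let buffers = UnsafeMutableAudioBufferListPointer(audioBufferList)
            // Read the target frequency once per render cycle; pitch changes
            // therefore take effect smoothly, without restarting the tone.
            let increment = 2.0 * .pi * self.frequency / sampleRate
            for frame in 0..<Int(frameCount) {
                let sample = Float(sin(self.phase)) * 0.5 // keep the volume comfortable
                self.phase += increment
                if self.phase > 2.0 * .pi { self.phase -= 2.0 * .pi }
                for buffer in buffers {
                    buffer.mData?.assumingMemoryBound(to: Float.self)[frame] = sample
                }
            }
            return noErr
        }
        engine.attach(source)
        engine.connect(source, to: engine.mainMixerNode, format: nil)
        try engine.start()
    }
}
```

Because the oscillator keeps its phase across frequency changes, updating `frequency` as the finger moves bends the pitch smoothly instead of clicking the way restarting a fixed-length note would.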

The second major challenge was a time crunch at the end. During the two weeks leading up to the Swift Student Challenge deadline, I was on a school trip to San Francisco and was only able to work on Seymour occasionally. I managed to complete it, cutting it very close: I submitted it an hour before the deadline on my return flight, with my father’s help. When I found out I won, it was a very fulfilling feeling to see that all the time spent working on my submission, including rushing the final parts on a plane, actually paid off.

Image Source: Kai

Can you share any feedback or stories from users of Seymour that have particularly resonated with you?

Although I haven't had the chance to test Seymour with my target users yet, I’ve reached out to charities that support and aid the visually impaired. I hope that Seymour will be able to improve their environmental understanding and, eventually, grant them navigational independence, even in areas without built-in accessibility features.

Aside from Seymour, what other side projects are you currently working on, and how do they contribute to your goal of helping visually impaired individuals?

One side project I'm currently working on is a hardware add-on to Seymour that constantly "feeds" the user real-time depth information through a grid of vibration motors on the back of their neck. This system uses the sense of touch instead of hearing: since hearing is usually considered the second most important sense after sight, there may be better solutions than crowding it with constant noise. A hardware vibration grid can also deliver continuous information in greater detail than a single finger swiping across the depth map. I hope that these different ways of delivering environmental detail to visually impaired individuals can eventually convey enough information that they're able to navigate complex areas by themselves.
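As a back-of-the-envelope sketch of how a depth map might drive such a grid: average-pooling the map down to one value per motor is one simple reduction. The normalized-depth convention and grid dimensions here are assumptions, and the real hardware interface is still a prototype:

```swift
// Back-of-the-envelope sketch: average-pool a normalized depth map (1.0 = near)
// down to one intensity per motor in a hypothetical gridRows x gridCols array.
// The actual hardware interface is a prototype and isn't shown here.
func motorIntensities(from depthMap: [[Double]], gridRows: Int, gridCols: Int) -> [[Double]] {
    let rows = depthMap.count
    let cols = depthMap.first?.count ?? 0
    guard gridRows > 0, gridCols > 0, rows >= gridRows, cols >= gridCols else { return [] }
    var grid = Array(repeating: Array(repeating: 0.0, count: gridCols), count: gridRows)
    for r in 0..<gridRows {
        for c in 0..<gridCols {
            // Average all depth values that fall inside this motor's cell.
            let r0 = r * rows / gridRows, r1 = (r + 1) * rows / gridRows
            let c0 = c * cols / gridCols, c1 = (c + 1) * cols / gridCols
            var sum = 0.0
            for i in r0..<r1 {
                for j in c0..<c1 { sum += depthMap[i][j] }
            }
            grid[r][c] = sum / Double((r1 - r0) * (c1 - c0))
        }
    }
    return grid // closer objects -> stronger vibration on that motor
}
```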

Image Source: Kai

Looking ahead, what are your future aspirations in the field of app development, and how do you plan to continue making a positive impact through your work?

In the future, I want to publish a few of my own apps. Not all of them need to be accessibility-based, but I want to make sure every one of my apps positively impacts a person’s quality of life. I only have rough ideas for a few problems I'd like to solve, such as getting people like me to exercise or to meet up with friends regularly. I hope that someday the light bulb turns on and I can write something truly impactful.
