Detecting a stairway and recognizing it as upstairs, downstairs, or negative (e.g., a ladder or level ground) is fundamental to helping visually impaired people travel independently in unfamiliar environments. Previous studies have used large amounts of RGB-D scene data to train traditional machine learning (ML) models to detect and recognize stationary stairways and escalator stairways separately. However, none of them jointly train on these two similar but distinct datasets to achieve better performance. This paper applies an adversarial learning algorithm to this unsupervised domain adaptation scenario, transferring knowledge learned from a labeled RGB-D escalator stairway dataset to an unlabeled RGB-D stationary stairway dataset. With the developed method, a feedforward convolutional neural network (CNN) feature extractor with five convolutional layers achieves 100% classification accuracy on the labeled escalator stairway test data and 80.6% classification accuracy on the unlabeled stationary stairway test data. These results demonstrate that the approach can classify stairways across the two domains with a limited amount of data. To further demonstrate the effectiveness of the proposed method, the same CNN model is evaluated without domain adaptation and the results are compared with those of the presented architecture.
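The abstract does not specify the implementation, but adversarial unsupervised domain adaptation of this kind (e.g., DANN by Ganin and Lempitsky) typically hinges on a gradient reversal layer: features pass through unchanged in the forward pass, while the domain classifier's gradient is negated (and scaled) on the way back to the feature extractor, pushing the learned features toward domain invariance. A minimal NumPy sketch of that mechanism follows; the class name and the lambda value are illustrative assumptions, not details from the paper.

```python
import numpy as np

class GradientReversal:
    """Identity in the forward pass; negates and scales the gradient
    in the backward pass (the core trick in DANN-style adversarial
    domain adaptation)."""

    def __init__(self, lam=1.0):
        self.lam = lam  # adaptation strength (hypothetical default)

    def forward(self, x):
        # Features flow to the domain classifier unchanged.
        return x

    def backward(self, grad_output):
        # The domain classifier's gradient is reversed before it reaches
        # the feature extractor, so the extractor learns features that
        # make the source and target domains hard to tell apart.
        return -self.lam * grad_output

grl = GradientReversal(lam=0.5)
features = np.array([1.0, 2.0, 3.0])
print(grl.forward(features))            # unchanged: [1. 2. 3.]
print(grl.backward(np.ones(3)))         # reversed:  [-0.5 -0.5 -0.5]
```

In a full pipeline, the stairway label classifier trains only on labeled source (escalator) features, while the domain classifier sees both domains through this layer.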