Accepted Papers
Recurrent Halting Chain for Early Multi-label Classification
Thomas Hartvigsen: Worcester Polytechnic Institute; Cansu Sen: Worcester Polytechnic Institute; Xiangnan Kong: Worcester Polytechnic Institute; Elke Rundensteiner: Worcester Polytechnic Institute
Early multi-label classification of time series, the assignment of a label set to a time series before the series is entirely observed, is critical for time-sensitive domains such as healthcare. In such cases, waiting too long to classify can render predictions useless, regardless of their accuracy, while predicting prematurely can produce costly errors. When predicting multiple labels (for example, types of infections), dependencies between labels can be learned and leveraged to improve overall accuracy. Reliably predicting the correct label set of a time series while observing as few timesteps as possible is challenging because the two goals conflict: fewer timesteps often mean worse accuracy. To achieve early yet sufficiently accurate predictions, correlations between labels must be accounted for, since direct evidence of some labels may appear only late in the series. We design an effective solution to this open problem, the Recurrent Halting Chain (RHC), which for the first time integrates key innovations in both Early and Multi-label Classification into one multi-objective model. RHC uses a recurrent neural network to jointly model the raw time series and the correlations between labels, resulting in a novel order-free classifier chain that tackles this time-sensitive multi-label learning task. Further, RHC employs a reinforcement learning-based halting network to decide at each timestep which classes, if any, should be predicted, learning to build the label set over time. Using two real-world time-sensitive datasets and popular multi-label metrics, we show that RHC outperforms recent alternatives by predicting more accurate label sets earlier.
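The abstract outlines the core mechanism: a recurrent model reads the series one timestep at a time, a halting policy decides per label when to commit a prediction, and committed labels are fed back into the recurrence so later decisions can exploit label correlations. Below is a minimal PyTorch sketch of that idea, for intuition only; the class name `RHCSketch`, the greedy sigmoid-threshold halting rule, and all layer sizes are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class RHCSketch(nn.Module):
    """Illustrative RHC-style model (hypothetical, not the paper's code):
    a GRU reads the series step by step; a halting head decides per label
    whether to commit a prediction now; committed labels are fed back into
    the recurrence so later decisions can use label correlations,
    giving an order-free chain."""

    def __init__(self, n_features: int, n_labels: int, hidden: int = 64):
        super().__init__()
        self.n_labels = n_labels
        # Input at each step = raw features + the label set committed so far.
        self.rnn = nn.GRUCell(n_features + n_labels, hidden)
        self.label_head = nn.Linear(hidden, n_labels)  # per-label logits
        self.halt_head = nn.Linear(hidden, n_labels)   # per-label halt scores

    @torch.no_grad()
    def predict(self, x: torch.Tensor, halt_threshold: float = 0.5):
        """Greedy inference only; the paper instead trains the halting
        decisions with reinforcement learning. x: (batch, time, features)."""
        B, T, _ = x.shape
        h = x.new_zeros(B, self.rnn.hidden_size)
        committed = x.new_zeros(B, self.n_labels)      # labels fixed so far
        halted = torch.zeros(B, self.n_labels, dtype=torch.bool,
                             device=x.device)
        steps = T
        for t in range(T):
            h = self.rnn(torch.cat([x[:, t], committed], dim=-1), h)
            halt_now = torch.sigmoid(self.halt_head(h)) > halt_threshold
            newly = halt_now & ~halted                 # halt each label once
            preds = (torch.sigmoid(self.label_head(h)) > 0.5).float()
            committed = torch.where(newly, preds, committed)
            halted |= newly
            if bool(halted.all()):                     # all labels decided:
                steps = t + 1                          # stop observing early
                break
        return committed, steps        # predicted label set, timesteps read


# Usage on random data: 2 series, 50 timesteps, 8 features, 4 labels.
model = RHCSketch(n_features=8, n_labels=4)
labels, steps_used = model.predict(torch.randn(2, 50, 8))
```

In the paper's formulation the halting decisions are learned with a reinforcement-learning objective that trades earliness against accuracy; the fixed threshold above only stands in for that trained policy at inference time.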