I read a bit about RNNs, and it seems like sequence generation (i.e. generating arbitrarily long sequences of words) is a big use case. But frequently, the training data and input data consist only of past sequences of the very thing we're trying to predict. Is it possible to use other sequence features as well, besides the thing that we're trying to predict?

Problem: categorize activity as Low, Medium, or High.
Input: activity of the last N timesteps (L, L, M, H, M, ...).
Other features: other things that are related to activity, also represented as a sequence with a 1:1 correspondence with the input.
Goal: generate predictions for M timesteps in the future.

We would not be prescient of the values of the "other features" in the future either, so either we need to predict them, or (preferably) the neural net would do that prediction itself. Is it possible to use other features in this classification problem as well? If not, are there other algorithms that would be a better fit for this problem?

Answer: Nothing stops neural networks from doing this, and although I can't point to a concrete GitHub project right away, I'm absolutely sure it has been done. It's important to remember that any neural network can have several outputs (as correctly noted in the comments). The most widely known example is convolutional neural networks, which solve object recognition and detection tasks using the same hidden weights: there are usually multiple heads on top of the deep network, one predicting the object class and the other predicting the bounding rectangle. See the ImageNet competitions, where it's common for a team to participate in different competitions with (almost) the same network. Recurrent neural networks are no exception.
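To make the multi-head idea concrete, here is a minimal PyTorch sketch, not taken from any particular project. All names (`ActivityRNN`, `rollout`) and dimensions (3 activity classes, 4 extra features, window of 5 timesteps) are my own assumptions for illustration: an LSTM consumes the activity sequence concatenated with the extra features, one head classifies the next activity (L/M/H), and a second head predicts the next step's extra features, so the network supplies those values itself when rolling forward M steps.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ActivityRNN(nn.Module):
    """Hypothetical two-headed RNN: each timestep's input is the one-hot
    activity class concatenated with the extra features; one head emits
    class logits, the other predicts the next step's extra features."""

    def __init__(self, n_classes=3, n_extra=4, hidden=32):
        super().__init__()
        self.n_classes = n_classes
        self.rnn = nn.LSTM(n_classes + n_extra, hidden, batch_first=True)
        self.class_head = nn.Linear(hidden, n_classes)   # L/M/H logits
        self.feature_head = nn.Linear(hidden, n_extra)   # predicted extras

    def forward(self, x):
        out, _ = self.rnn(x)               # (batch, timesteps, hidden)
        last = out[:, -1]                  # state after the last timestep
        return self.class_head(last), self.feature_head(last)


def rollout(model, seq, m_steps):
    """Predict M future classes autoregressively: feed the model's own
    class and feature predictions back in as the next timestep."""
    preds = []
    for _ in range(m_steps):
        logits, extras = model(seq)
        cls = logits.argmax(-1)
        onehot = F.one_hot(cls, model.n_classes).float()
        step = torch.cat([onehot, extras], dim=-1).unsqueeze(1)
        seq = torch.cat([seq[:, 1:], step], dim=1)  # slide the window
        preds.append(cls)
    return torch.stack(preds, dim=1)       # (batch, M) predicted classes


if __name__ == "__main__":
    # toy batch: 8 sequences of 5 timesteps, 3 one-hot dims + 4 extras
    model = ActivityRNN()
    x = torch.randn(8, 5, 7)
    logits, extras = model(x)
    print(logits.shape, extras.shape)  # torch.Size([8, 3]) torch.Size([8, 4])
    print(rollout(model, x, 6).shape)  # torch.Size([8, 6])
```

In training you would sum two losses, e.g. cross-entropy on the class head and MSE on the feature head, so both heads learn from the same shared hidden weights, exactly as the detection networks described above do.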