Deep neural networks have recently shown promise in many language-related tasks, such as the modelling of
conversations. We extend RNN-based sequence-to-sequence models to capture long-range discourse
across many turns of conversation. We perform a sensitivity analysis on how much additional context
affects performance, and provide quantitative and qualitative evidence that these models can capture
discourse relationships across multiple utterances. Our results show that adding an RNN layer for
modelling discourse improves the quality of output utterances, and that providing more of the preceding
conversation as input also improves performance. By searching the generated outputs for specific
discourse markers, we show how neural discourse models can exhibit increased coherence and cohesion in
conversations.
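The architecture described above can be sketched as a hierarchy of two RNNs: a lower-level RNN encodes each utterance into a vector, and an additional discourse-level RNN runs over those utterance vectors across turns, producing a context vector on which the decoder is conditioned. The following is a minimal numpy sketch of the forward pass only; all dimensions, parameter names, and the use of a vanilla tanh RNN cell are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def rnn(xs, Wx, Wh, h0):
    # Vanilla RNN cell: h_t = tanh(Wx x_t + Wh h_{t-1}),
    # returning only the final hidden state as the sequence encoding.
    h = h0
    for x in xs:
        h = np.tanh(Wx @ x + Wh @ h)
    return h

rng = np.random.default_rng(0)
d_word, d_utt, d_ctx = 8, 6, 4  # toy dimensions (assumed)

# Utterance-level encoder parameters (words -> utterance vector)
Wx_u = rng.normal(scale=0.1, size=(d_utt, d_word))
Wh_u = rng.normal(scale=0.1, size=(d_utt, d_utt))
# Discourse-level RNN parameters (utterance vectors -> context vector)
Wx_c = rng.normal(scale=0.1, size=(d_ctx, d_utt))
Wh_c = rng.normal(scale=0.1, size=(d_ctx, d_ctx))

# A toy conversation: three turns, each a list of word vectors
conversation = [
    [rng.normal(size=d_word) for _ in range(5)],
    [rng.normal(size=d_word) for _ in range(3)],
    [rng.normal(size=d_word) for _ in range(4)],
]

# Encode each utterance, then run the discourse RNN across turns
utt_vecs = [rnn(utt, Wx_u, Wh_u, np.zeros(d_utt)) for utt in conversation]
context = rnn(utt_vecs, Wx_c, Wh_c, np.zeros(d_ctx))
# `context` summarises the whole conversation so far; a decoder RNN
# conditioned on it would generate the next utterance.
```

Feeding more previous turns into the discourse-level RNN corresponds to the sensitivity analysis above: the context vector is a function of however many prior utterances are provided as input.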