Eyes play an important role in communication among people. Eye movements express emotions and regulate the flow of conversation. Hence, we consider it fundamental that virtual humans and other characters present convincing, expressive gaze in applications such as Embodied Conversational Agents (ECAs), games, and movies. However, in many applications that require the automatic generation of facial movements, such as ECAs, the character's eye motion carries no meaning related to its expressiveness. This work proposes a model for the automatic generation of expressive gaze based on an examination of eye behavior in different affective states. To collect data relating gaze to expressiveness, we analyzed computer-graphics movies; this data served as the basis for describing the gaze expressions in the proposed model. We also implemented a prototype and performed user tests to observe the impact of eye behavior during expressions of emotion. The results show that the model is capable of generating eye motions that are coherent with the affective states of the virtual character.