By prasads on Dec 04, 2009
You may have seen earlier blogs on how CAFE can simplify writing a conferencing application. Now, a typical conference is incomplete without the ability to play a welcome message or to record a message. CAFE now supports playing media to calls and conferences, as well as recording them.
How is this done?
CAFE exposes a MediaParticipant (org.glassfish.cafe.api.MediaParticipant) interface which represents the media interaction for the communication. A MediaParticipant is associated with a CommunicationSession and provides playing or recording support to either a call or a conference, whichever "Communication" the MediaParticipant is added to.
The MediaParticipant has two sub-interfaces: Player and Recorder. Depending upon the operation(s) one wants to support (playing, recording, or both), a new MediaParticipant of type Player or Recorder has to be created and added to the Communication (the communication being a conference or a call). The following snippet of code demonstrates how this can be achieved.
// create the media participants from the CommunicationSession
Recorder recorder = session.createParticipant(Recorder.class, "recorder1");
Player player = session.createParticipant(Player.class, "player1");
// add the recorder and the player as media participants to the
// Communication (here, the conference created earlier)
conference.addParticipant(recorder);
conference.addParticipant(player);
Once the MediaParticipant is added, the Player or Recorder instance that has been added to the Communication can be used to play or record. The Player and Recorder expose start() and stop() methods to start and stop playing or recording; when to trigger these operations is determined by the application logic in CAFE. The following snippet shows the code which achieves what has been described above.
// start recording using the recorder object added to the conference
recorder.start();
// start playing using the player object added to the conference
player.start();
Similarly, the stop() method would need to be invoked on the respective object to stop the playing or recording.
What happens under the hood?
CAFE uses the JSR 309 API to connect to the media server (jVoiceBridge in this case) and to control the conferencing, media recording, and media playing operations on the media server. It would take a whole new blog to describe how the jVoiceBridge implementation works, and that's in the works. However, to satisfy your curiosity, I will briefly describe how JSR 309 is used in this case:
JSR 309 exposes the following interfaces:
• MediaSession - represents the overall scope of the application's interaction with the media server
• MediaMixer - represents a conference in the media session
• NetworkConnection - represents a call
• MediaGroup - represents a logical group of Player, Recorder, DTMF detector and signal generator resources
• Player, Recorder, SignalDetector and SignalGenerator - represent the player, recorder, and signal detector/generator instances in a MediaGroup
When a participant joins a conference, the NetworkConnection corresponding to that participant is joined to the MediaMixer representing the conference. At this time a MediaGroup is also joined to the MediaMixer or to the NetworkConnection, as the case may be (i.e. depending upon whether the conference or the caller is the target of the media operation). When a playing or recording operation is requested from CAFE, the Recorder or Player instance belonging to the MediaGroup that has been joined to the call or conference is retrieved, and the appropriate methods to start and stop recording or playing are invoked. The Player or Recorder implementation invokes the appropriate methods in jVoiceBridge to play or record media, hence abstracting the complexities away from the user.
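To make the flow above concrete, here is a rough sketch of the join-and-record sequence at the JSR 309 level. This is not CAFE's actual internal code; the types and method signatures follow the standard javax.media.mscontrol API, while the method name joinAndRecord and the file URIs are illustrative assumptions.

```java
import java.net.URI;
import javax.media.mscontrol.MediaSession;
import javax.media.mscontrol.join.Joinable.Direction;
import javax.media.mscontrol.mediagroup.MediaGroup;
import javax.media.mscontrol.mediagroup.Player;
import javax.media.mscontrol.mediagroup.Recorder;
import javax.media.mscontrol.mixer.MediaMixer;
import javax.media.mscontrol.networkconnection.NetworkConnection;

public class Jsr309Sketch {

    // mediaSession is assumed to have been obtained from the
    // driver's MsControlFactory; participantConnection represents
    // the joining participant's call leg
    void joinAndRecord(MediaSession mediaSession,
                       NetworkConnection participantConnection) throws Exception {
        // the mixer represents the conference on the media server
        MediaMixer mixer = mediaSession.createMediaMixer(MediaMixer.AUDIO);

        // join the participant's call leg to the conference
        participantConnection.join(Direction.DUPLEX, mixer);

        // a MediaGroup bundles the player/recorder resources; it is
        // joined to the mixer because the conference (not an individual
        // caller) is the target of the media operation here
        MediaGroup group = mediaSession.createMediaGroup(MediaGroup.PLAYER_RECORDER);
        group.join(Direction.DUPLEX, mixer);

        // record the conference to a file (URI is made up)
        Recorder recorder = group.getRecorder();
        recorder.record(URI.create("file:///tmp/conference.wav"), null, null);

        // play an announcement into the conference (URI is made up)
        Player player = group.getPlayer();
        player.play(URI.create("file:///tmp/welcome.wav"), null, null);
    }
}
```

Had the caller rather than the conference been the media target, the MediaGroup would be joined to the NetworkConnection instead of the MediaMixer, as described above.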
The sample application used in this blog is a NetBeans project; it starts recording a conference as soon as the conference is created. One can also play a file using a simple HTTP servlet interface.