A coherent “WhatsApp-like” integration: GiftedChat for messaging plus native WebRTC for calling (1:1 at first), with a Call button in the conversation header and a separate CallScreen (or a modal). This covers:
- the architecture design (RN ⇄ Signaling ⇄ WebRTC),
- the RN CLI dependencies,
- the code skeleton (ChatScreen + CallScreen + Signaling + WebRTC session),
- the required Android permissions,
- the production concerns (TURN, recovery, call states, latency).
1) Recommended architecture (simple, robust)
A. Messaging (GiftedChat)
Your “chat” backend (WebSocket/Supabase Realtime/etc.) handles:
- messages,
- receipts,
- attachments,
- participants.
B. WebRTC (1:1 calls)
You need a signaling server (WebSocket) to exchange:
- offer, answer, iceCandidate,
- call events: call_invite, call_accept, call_reject, call_end.
Media (audio/video) flows peer-to-peer when possible, otherwise through TURN.
2) RN CLI dependencies (Android)
npm i react-native-gifted-chat
npm i @react-navigation/native @react-navigation/native-stack
npm i react-native-screens react-native-safe-area-context react-native-gesture-handler
npm i react-native-webrtc
npm i react-native-incall-manager
npm i react-native-permissions
(Then follow the React Navigation setup steps and native linking for your RN version.)
3) Code skeleton
3.1 src/services/signaling.ts (JSON over WebSocket)
export type SignalMsg =
  | { type: "call_invite"; callId: string; to: string; from: string; room: string; video: boolean }
  | { type: "call_accept"; callId: string; from: string }
  | { type: "call_reject"; callId: string; from: string; reason?: string }
  | { type: "call_end"; callId: string; from: string }
  | { type: "webrtc_offer"; callId: string; sdp: any }
  | { type: "webrtc_answer"; callId: string; sdp: any }
  | { type: "webrtc_ice"; callId: string; candidate: any };

type Handler = (msg: SignalMsg) => void;

export class SignalingClient {
  private ws?: WebSocket;
  private handler?: Handler;

  constructor(private url: string, private token: string) {}

  connect(handler: Handler) {
    this.handler = handler;
    this.ws = new WebSocket(this.url, undefined, {
      headers: { Authorization: `Bearer ${this.token}` },
    } as any);
    this.ws.onmessage = (ev) => {
      try {
        const msg = JSON.parse(ev.data) as SignalMsg;
        this.handler?.(msg);
      } catch {}
    };
  }

  send(msg: SignalMsg) {
    if (!this.ws || this.ws.readyState !== 1) return;
    this.ws.send(JSON.stringify(msg));
  }

  close() {
    try { this.ws?.close(); } catch {}
  }
}
3.2 src/services/webrtcSession.ts (PC + ICE + offer/answer)
import {
  RTCPeerConnection,
  RTCSessionDescription,
  RTCIceCandidate,
  mediaDevices,
} from "react-native-webrtc";

export type WebRTCConfig = {
  iceServers: Array<{ urls: string[]; username?: string; credential?: string }>;
};

export class WebRTCSession {
  pc: RTCPeerConnection;
  localStream: any | null = null;
  remoteStream: any | null = null;

  constructor(
    private cfg: WebRTCConfig,
    private onIceCandidate: (c: any) => void,
    private onRemoteStream: (s: any) => void
  ) {
    this.pc = new RTCPeerConnection({ iceServers: cfg.iceServers });
    this.pc.onicecandidate = (ev) => {
      if (ev.candidate) this.onIceCandidate(ev.candidate);
    };
    this.pc.ontrack = (ev) => {
      // react-native-webrtc: ev.streams[0] is usually available
      const stream = ev.streams?.[0];
      if (stream) {
        this.remoteStream = stream;
        this.onRemoteStream(stream);
      }
    };
  }

  async startLocal({ audio, video }: { audio: boolean; video: boolean }) {
    const stream = await mediaDevices.getUserMedia({
      audio,
      video: video ? { facingMode: "user" } : false,
    });
    this.localStream = stream;
    // Add local tracks to the peer connection
    stream.getTracks().forEach((t: any) => this.pc.addTrack(t, stream));
    return stream;
  }

  async createOffer() {
    const offer = await this.pc.createOffer({ offerToReceiveAudio: true, offerToReceiveVideo: true });
    await this.pc.setLocalDescription(offer);
    return offer;
  }

  async acceptOfferAndCreateAnswer(offerSdp: any) {
    await this.pc.setRemoteDescription(new RTCSessionDescription(offerSdp));
    const answer = await this.pc.createAnswer();
    await this.pc.setLocalDescription(answer);
    return answer;
  }

  async applyAnswer(answerSdp: any) {
    await this.pc.setRemoteDescription(new RTCSessionDescription(answerSdp));
  }

  async addIceCandidate(candidate: any) {
    await this.pc.addIceCandidate(new RTCIceCandidate(candidate));
  }

  async stop() {
    try {
      this.localStream?.getTracks()?.forEach((t: any) => t.stop());
      this.remoteStream?.getTracks()?.forEach((t: any) => t.stop());
    } catch {}
    try { this.pc.close(); } catch {}
  }
}
3.3 src/screens/ChatScreen.tsx (GiftedChat + Call button)
- Adds a “Call” button to the header.
- Opens the CallScreen with callId + peerId.
import React, { useCallback, useLayoutEffect, useState } from "react";
import { View, Pressable, Text } from "react-native";
import { GiftedChat, IMessage } from "react-native-gifted-chat";
import { NativeStackScreenProps } from "@react-navigation/native-stack";

type RootStackParamList = {
  Chat: { peerId: string; peerName: string; conversationId: string };
  Call: { callId: string; peerId: string; video: boolean; direction: "outbound" | "inbound" };
};

type Props = NativeStackScreenProps<RootStackParamList, "Chat">;

const ME = { _id: "user_1", name: "Me" };

export default function ChatScreen({ navigation, route }: Props) {
  const { peerId, peerName, conversationId } = route.params;
  const [messages, setMessages] = useState<IMessage[]>([]);

  useLayoutEffect(() => {
    navigation.setOptions({
      title: peerName,
      headerRight: () => (
        <Pressable
          onPress={() => {
            const callId = `call_${Date.now()}_${Math.random().toString(16).slice(2)}`;
            navigation.navigate("Call", { callId, peerId, video: true, direction: "outbound" });
          }}
          style={{ paddingHorizontal: 12, paddingVertical: 6 }}
        >
          <Text>Call</Text>
        </Pressable>
      ),
    });
  }, [navigation, peerId, peerName]);

  const onSend = useCallback((newMessages: IMessage[] = []) => {
    setMessages((prev) => GiftedChat.append(prev, newMessages));
    // TODO: send to the chat backend (WS/Supabase/etc.)
  }, []);

  return (
    <View style={{ flex: 1 }}>
      <GiftedChat
        messages={messages}
        onSend={(msgs) => onSend(msgs)}
        user={ME}
        alwaysShowSend
        scrollToBottom
        placeholder="Message…"
      />
    </View>
  );
}
3.4 src/screens/CallScreen.tsx (WebRTC + signaling + InCallManager)
This screen:
- opens the microphone/camera,
- connects to signaling,
- handles outbound/inbound calls,
- exchanges offer/answer/ice,
- renders the local/remote streams (the UI is kept minimal; style it as you wish).
import React, { useEffect, useMemo, useRef, useState } from "react";
import { View, Text, Pressable } from "react-native";
import InCallManager from "react-native-incall-manager";
import { RTCView } from "react-native-webrtc";
import { SignalingClient, SignalMsg } from "../services/signaling";
import { WebRTCSession } from "../services/webrtcSession";

type Props = {
  route: { params: { callId: string; peerId: string; video: boolean; direction: "outbound" | "inbound" } };
  navigation: any;
};

const SIGNALING_URL = "wss://votre-vps-signaling/ws";
const TOKEN = "JWT_USER"; // provide this from your auth layer
const ME_ID = "user_1";

export default function CallScreen({ route, navigation }: Props) {
  const { callId, peerId, video, direction } = route.params;
  const signaling = useMemo(() => new SignalingClient(SIGNALING_URL, TOKEN), []);
  const sessionRef = useRef<WebRTCSession | null>(null);
  const [localStream, setLocalStream] = useState<any>(null);
  const [remoteStream, setRemoteStream] = useState<any>(null);
  const [status, setStatus] = useState<string>(direction === "outbound" ? "Calling…" : "Incoming call");

  const cfg = useMemo(
    () => ({
      iceServers: [
        { urls: ["stun:stun.l.google.com:19302"] },
        // IMPORTANT in production: add TURN (coturn) with credentials
        // { urls: ["turn:turn.votredomaine.com:3478"], username: "u", credential: "p" }
      ],
    }),
    []
  );

  useEffect(() => {
    InCallManager.start({ media: video ? "video" : "audio" });
    InCallManager.setSpeakerphoneOn(video); // video -> speakerphone

    sessionRef.current = new WebRTCSession(
      cfg,
      (candidate) => signaling.send({ type: "webrtc_ice", callId, candidate }),
      (stream) => setRemoteStream(stream)
    );

    const run = async () => {
      // Local media
      const local = await sessionRef.current!.startLocal({ audio: true, video });
      setLocalStream(local);

      // Connect signaling
      signaling.connect(async (msg: SignalMsg) => {
        if (msg.callId !== callId) return;
        if (msg.type === "call_accept") {
          setStatus("Connecting…");
          // Outbound side: create the offer
          const offer = await sessionRef.current!.createOffer();
          signaling.send({ type: "webrtc_offer", callId, sdp: offer });
        }
        if (msg.type === "webrtc_offer") {
          setStatus("Connecting…");
          const answer = await sessionRef.current!.acceptOfferAndCreateAnswer(msg.sdp);
          signaling.send({ type: "webrtc_answer", callId, sdp: answer });
        }
        if (msg.type === "webrtc_answer") {
          await sessionRef.current!.applyAnswer(msg.sdp);
          setStatus("In call");
        }
        if (msg.type === "webrtc_ice") {
          await sessionRef.current!.addIceCandidate(msg.candidate);
        }
        if (msg.type === "call_reject") {
          setStatus("Rejected");
          endCall();
        }
        if (msg.type === "call_end") {
          setStatus("Ended");
          endCall();
        }
      });

      // Outbound side: invite the peer
      if (direction === "outbound") {
        signaling.send({ type: "call_invite", callId, to: peerId, from: ME_ID, room: callId, video });
      }
    };

    run().catch(() => {
      setStatus("Error");
      endCall();
    });

    return () => {
      try { signaling.close(); } catch {}
      try { sessionRef.current?.stop(); } catch {}
      InCallManager.stop();
    };
    // eslint-disable-next-line react-hooks/exhaustive-deps
  }, []);

  const endCall = async () => {
    try { signaling.send({ type: "call_end", callId, from: ME_ID }); } catch {}
    try { await sessionRef.current?.stop(); } catch {}
    navigation.goBack();
  };

  const acceptInbound = () => {
    setStatus("Accepting…");
    signaling.send({ type: "call_accept", callId, from: ME_ID });
  };

  const rejectInbound = () => {
    signaling.send({ type: "call_reject", callId, from: ME_ID, reason: "user_reject" });
    endCall();
  };

  return (
    <View style={{ flex: 1 }}>
      <View style={{ padding: 12 }}>
        <Text>{status}</Text>
        <Text>callId: {callId}</Text>
      </View>
      <View style={{ flex: 1 }}>
        {remoteStream ? (
          <RTCView streamURL={remoteStream.toURL()} style={{ flex: 1 }} objectFit="cover" />
        ) : (
          <View style={{ flex: 1, alignItems: "center", justifyContent: "center" }}>
            <Text>Waiting for the other party…</Text>
          </View>
        )}
        {localStream ? (
          <RTCView
            streamURL={localStream.toURL()}
            style={{ position: "absolute", right: 12, bottom: 12, width: 120, height: 180 }}
            objectFit="cover"
          />
        ) : null}
      </View>
      <View style={{ flexDirection: "row", justifyContent: "space-around", padding: 12 }}>
        {direction === "inbound" ? (
          <>
            <Pressable onPress={rejectInbound}><Text>Reject</Text></Pressable>
            <Pressable onPress={acceptInbound}><Text>Accept</Text></Pressable>
          </>
        ) : null}
        <Pressable onPress={endCall}><Text>Hang up</Text></Pressable>
      </View>
    </View>
  );
}
4) Android permissions (required)
In android/app/src/main/AndroidManifest.xml:
<uses-permission android:name="android.permission.RECORD_AUDIO" />
<uses-permission android:name="android.permission.CAMERA" />
<uses-permission android:name="android.permission.MODIFY_AUDIO_SETTINGS" />
<uses-permission android:name="android.permission.BLUETOOTH" />
<uses-permission android:name="android.permission.BLUETOOTH_CONNECT" />
Also request them at runtime via react-native-permissions (when the call starts).
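A sketch of that runtime request: the `permissionsForCall` helper name is an assumption; the permission strings match the manifest entries above, and `requestMultiple`/`RESULTS` are the react-native-permissions API.

```typescript
// Hypothetical helper: list the Android permissions a call needs.
const AUDIO_PERMS = ["android.permission.RECORD_AUDIO"];
const VIDEO_PERMS = [...AUDIO_PERMS, "android.permission.CAMERA"];

function permissionsForCall(video: boolean): string[] {
  return video ? VIDEO_PERMS : AUDIO_PERMS;
}

// Inside the app, before navigating to CallScreen (not runnable outside RN):
//
//   import { requestMultiple, RESULTS } from "react-native-permissions";
//   const statuses = await requestMultiple(permissionsForCall(video) as any);
//   const granted = Object.values(statuses).every((s) => s === RESULTS.GRANTED);
//   if (!granted) return; // abort the call
```

Requesting just before the call (rather than at app start) matches Android best practice: the user sees the prompt in context.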
5) Production concerns not to overlook
- TURN is mandatory (coturn): without TURN, many calls fail behind NAT/4G.
- Idempotency and call states: a call is a state machine (ringing, accepted, connected, ended).
- Incoming calls: you need a push-notification mechanism, or at minimum an “incoming call” screen when the app is in the foreground.
- Chat/call coupling: you can record a system message in the conversation, e.g.:
  - “Video call — 3 min 12 — ended”
- Security: authenticated signaling (JWT), anti-spam, quotas.
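To illustrate the state-machine point (the class name and transition table below are an assumption, not your app's actual model): enforcing legal transitions makes duplicate or out-of-order signaling messages harmless instead of corrupting call state.

```typescript
type CallState = "idle" | "ringing" | "accepted" | "connected" | "ended";

// Allowed next states from each state; anything else is silently ignored.
const TRANSITIONS: Record<CallState, CallState[]> = {
  idle: ["ringing"],
  ringing: ["accepted", "ended"],   // reject/timeout goes straight to ended
  accepted: ["connected", "ended"],
  connected: ["ended"],
  ended: [],                        // terminal: ignore everything afterwards
};

class CallStateMachine {
  state: CallState = "idle";

  // Returns true if the transition was applied, false if it was ignored.
  transition(next: CallState): boolean {
    if (!TRANSITIONS[this.state].includes(next)) return false;
    this.state = next;
    return true;
  }
}
```

Each signaling message (call_invite, call_accept, webrtc_answer, call_end) maps to one transition attempt; a second call_invite for the same call, or a stray webrtc_answer after hangup, is simply rejected by the table.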
6) What I implicitly need (and am assuming)
You say you already have a WebRTC app: perfect. You only need to reuse:
- your signaling server (or build a minimal WS one),
- your STUN/TURN servers,
- your user-identification logic.