
Voice Recognition Functionality Does Not Stop When Navigating Between Screens #502

Open
Najiullah-khan opened this issue Jun 8, 2024 · 1 comment

Comments


I am developing a React Native application that uses the @react-native-voice/voice library to implement a voice command interface. There is an issue with the voice recognition functionality when navigating between screens. Specifically, when transitioning from the Objectdetection screen to the FakeCurrency screen, voice recognition stops on Objectdetection and starts correctly on FakeCurrency. However, when navigating back to Objectdetection, the voice recognition on the FakeCurrency screen does not stop, so both screens can end up with active voice recognition processes at the same time. This causes overlapping functionality and unexpected behavior within the app.
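
For reference, this is the focus-scoped lifecycle I am trying to achieve, where a screen owns the recognizer only while it is focused. This is a minimal sketch using useFocusEffect from @react-navigation/native; useVoiceOnFocus and onResults are illustrative names, not my actual code:

import {useCallback} from 'react';
import Voice from '@react-native-voice/voice';
import {useFocusEffect} from '@react-navigation/native';

// Sketch: start recognition when the screen gains focus and tear it
// down when it blurs, so only one screen ever has an active recognizer.
function useVoiceOnFocus(onResults) {
  useFocusEffect(
    useCallback(() => {
      Voice.onSpeechResults = onResults;
      Voice.start('en-GB');

      return () => {
        // Cleanup runs on blur/unmount: stop first, then destroy and
        // detach listeners before the next screen starts its own session.
        Voice.stop()
          .then(() => Voice.destroy())
          .then(() => Voice.removeAllListeners());
      };
    }, [onResults]),
  );
}

Against this expected behavior, the FakeCurrency recognizer keeps firing after navigating back.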


Soban71 commented Jun 8, 2024

Yes, I am facing the same issue as well.
Below is my code for both modules:
import React, {useState, useEffect, useRef, useCallback} from 'react';
import {
  StyleSheet,
  Text,
  View,
  Button,
  Image,
  Alert,
  PermissionsAndroid,
  Platform,
} from 'react-native';
import Voice from '@react-native-voice/voice';
import {useIsFocused} from '@react-navigation/native';
import RNImmediatePhoneCall from 'react-native-immediate-phone-call';

import {
  useCameraPermission,
  useMicrophonePermission,
  useCameraDevice,
  Camera,
} from 'react-native-vision-camera';
import axios from 'axios';
import Tts from 'react-native-tts';
import SQLite from 'react-native-sqlite-storage';

const db = SQLite.openDatabase(
  {name: 'contSDB.db', createFromLocation: '~conSDB.db'},
  () => {
    console.log('Database opened successfully.');
  },
  error => {
    console.error('Error opening database: ', error);
  },
);

function Objectdetection({navigation}) {
  var contactArray;
  const isFocused = useIsFocused();

  const [imag, setimag] = useState(null);
  const camera = useRef(null);
  const {
    hasPermission: cameraPermission,
    requestPermission: requestCameraPermission,
  } = useCameraPermission();
  const {
    hasPermission: microphonePermission,
    requestPermission: requestMicrophonePermission,
  } = useMicrophonePermission();
  const cameraDevice = useCameraDevice('back');
  const [loading, setLoading] = useState(true);

  useEffect(() => {
    if (!cameraPermission) {
      requestCameraPermission();
    }
    if (!microphonePermission) {
      requestMicrophonePermission();
    }
  }, [cameraPermission, microphonePermission]);

  useEffect(() => {
    if (cameraPermission && microphonePermission) {
      setLoading(false);
    }
  }, [cameraPermission, microphonePermission]);

  const [initialFocus, setInitialFocus] = useState(true); // Track initial focus

  useEffect(() => {
    setInitialFocus(true); // Reset initialFocus state when screen is focused
  }, [isFocused]);

  useEffect(() => {
    let intervalId;

    const takePhotoInterval = async () => {
      intervalId = setInterval(async () => {
        await takePhoto();
      }, 5000); // Call takePhoto every 5 seconds
    };

    if (!loading && cameraPermission && microphonePermission && cameraDevice) {
      takePhotoInterval();
    }

    return () => {
      clearInterval(intervalId); // Clear interval on component unmount
    };
  }, [loading, cameraPermission, microphonePermission, cameraDevice]);

  const takePhoto = async () => {
    if (camera.current != null) {
      const file = await camera.current.takePhoto({
        qualityPrioritization: 'speed',
        flash: 'off',
        enableShutterSound: false,
      });
      const image = `file://${file.path}`;
      setimag(image);
      const formData = new FormData();
      formData.append('file', {
        uri: image,
        name: 'image.jpg',
        type: 'image/jpg',
      });

      try {
        const response = await axios.post(
          'http://192.168.1.29:5000/predict',
          formData,
          {
            headers: {
              'Content-Type': 'multipart/form-data',
            },
          },
        );
        console.log('Predictions:', response.data.predictions);
        const res = response.data.predictions.toString();

        if (res !== '') {
          // Speak the detected object
          Tts.speak(res);
        } else {
          console.log('No object detected', 'Please try again.');
        }
      } catch (error) {
        console.log('Error uploading image:', error);
        console.log('Error', 'Failed to upload image. Please try again.');
      }
    }
  };

  const [isListening, setIsListening] = useState(false);
  const [reloadIntervalId, setReloadIntervalId] = useState(null);

  useEffect(() => {
    const intervalId = setInterval(() => {
      if (!navigation.isFocused()) {
        clearInterval(intervalId);
        Voice.destroy().then(Voice.removeAllListeners);
        return;
      }
      startListening();
    }, 5000);

    return () => {
      clearInterval(intervalId);
      Voice.destroy().then(Voice.removeAllListeners);
    };
  }, [navigation, isFocused]);

  useEffect(() => {
    const requestPermission = async () => {
      try {
        if (Platform.OS === 'android') {
          const granted = await PermissionsAndroid.request(
            PermissionsAndroid.PERMISSIONS.RECORD_AUDIO,
            {
              title: 'Permission to use microphone',
              message: 'We need your permission to use your microphone',
              buttonNeutral: 'Ask Me Later',
              buttonNegative: 'Cancel',
              buttonPositive: 'OK',
            },
          );

          console.log('Permission result:', granted);

          if (granted === PermissionsAndroid.RESULTS.GRANTED) {
            //  startListening();
          } else {
            console.warn('Permission denied');
          }
        }
      } catch (err) {
        console.error('Permission error: ', err);
      }
    };

    requestPermission();

    return () => {
      Voice.destroy().then(Voice.removeAllListeners);
      clearInterval(reloadIntervalId);
    };
  }, []);

  const startListening = async () => {
    try {
      console.log('Starting voice recognition... in Object detection module');
      await Voice.start('en-GB');
      setIsListening(true);
    } catch (err) {
      console.error('Error starting voice recognition: ', err);
      setIsListening(false);
    }
  };

  function getContactNumber(contactName) {
    // Iterate through each contact in the array
    for (var i = 0; i < contactArray.length; i++) {
      // Check if the contact_name matches
      if (contactArray[i].contact_name == contactName) {
        return contactArray[i].contact_number;
      }
    }
    return null;
  }

  const onSpeechResults = e => {
    const spokenText = e.value[0].toLowerCase();
    fetchContacts();

    console.log('call Spoken Text:', spokenText);
    const spokenTextArray = spokenText.split(' ');
    console.log('object detection');

    console.log('Split Spoken Text:', spokenTextArray);

    if (spokenTextArray[0] == 'call') {
      var contactNumber = getContactNumber(
        spokenTextArray[1].charAt(0).toUpperCase() +
          spokenTextArray[1].slice(1),
      );
      if (contactNumber !== null) {
        console.log(
          'Contact number for ' + spokenTextArray[1] + ' is: ' + contactNumber,
        );
        RNImmediatePhoneCall.immediatePhoneCall(contactNumber);
      } else {
        console.log('Contact not found for ' + spokenTextArray[1]);
      }
    }

    // if (spokenText === 'object detection') {
    //   console.log('Navigating to Objectdetection screen...');
    //   navigation.navigate('Objectdetection');
    // }

    if (spokenText === 'scanner') {
      console.log('Navigating to Textscanner screen...');
      navigation.navigate('Textscanner');
    }
    if (spokenText === 'help') {
      console.log('Navigating to Google screen...');
      navigation.navigate('Emergencycontact');
    }
    if (spokenText === 'currency detection') {
      setIsListening(false);
      Voice.destroy().then(Voice.removeAllListeners);
      Voice.stop();
      Voice.cancel();
      navigation.navigate('currency');
    }
    if (spokenText === 'location') {
      navigation.navigate('Navigation');
    }
    setIsListening(false);
  };

  const onSpeechEnd = () => {
    setIsListening(false);
  };

  useEffect(() => {
    Voice.onSpeechResults = onSpeechResults;
    Voice.onSpeechEnd = onSpeechEnd;

    return () => {
      Voice.destroy().then(Voice.removeAllListeners);
    };
  }, [navigation]);

  /** fetching contacts */
  const [contacts, setContacts] = useState([]);

  useEffect(() => {
    fetchContacts();
  }, [isFocused]);

  const fetchContacts = () => {
    db.transaction(tx => {
      tx.executeSql(
        'SELECT * FROM contacts',
        [],
        (_, {rows}) => {
          const fetchedContacts = rows.raw();
          setContacts(fetchedContacts);
          // console.log(fetchedContacts);
          contactArray = fetchedContacts;
        },
        error => {
          console.log('Error fetching contacts: ', error.message);
        },
      );
    });
  };

  return (
    <View style={styles.container}>
      {loading ? (
        <Text style={styles.text}>Loading...</Text>
      ) : !cameraPermission || !microphonePermission ? (
        <Text style={styles.text}>
          Please grant camera and microphone permissions to use the app.
        </Text>
      ) : !cameraDevice ? (
        <Text style={styles.text}>No camera device available.</Text>
      ) : (
        <>
          {/* Camera preview; the element's props were lost in the paste, these are representative */}
          <Camera
            ref={camera}
            style={styles.camera}
            device={cameraDevice}
            isActive={isFocused}
            photo={true}
          />
        </>
      )}
    </View>
  );
}

const styles = StyleSheet.create({
  container: {
    flex: 1,
    justifyContent: 'center',
    alignItems: 'center',
  },
  text: {
    fontSize: 24,
    fontWeight: 'bold',
    textAlign: 'center',
  },
  camera: {
    flex: 1,
    width: '100%',
  },
});

export default Objectdetection;

// This is the second module

import React, {useState, useEffect} from 'react';
import {View, Text, StyleSheet} from 'react-native';
import Voice from '@react-native-voice/voice';
import {useIsFocused} from '@react-navigation/native';

function CurrencyDetection({navigation}) {
  const isFocused = useIsFocused();
  const [isListening, setIsListening] = useState(false);
  const [initialFocus, setInitialFocus] = useState(true); // Track initial focus

  useEffect(() => {
    setInitialFocus(true); // Reset initialFocus state when screen is focused
  }, [isFocused]);

  useEffect(() => {
    if (isFocused) {
      const intervalId = setInterval(() => {
        startListening();
      }, 5000);

      return () => {
        clearInterval(intervalId);
        stopListening();
      };
    }
  }, [isFocused]);

  const startListening = async () => {
    try {
      console.log('Starting voice recognition... in Currency detection module');
      await Voice.start('en-GB');
      setIsListening(true);
    } catch (err) {
      console.error('Error starting voice recognition: ', err);
      setIsListening(false);
    }
  };

  const stopListening = async () => {
    try {
      // Stop and cancel before destroying, so we are not calling
      // stop() on an already-destroyed recognizer.
      await Voice.stop();
      await Voice.cancel();
      await Voice.destroy();
      Voice.removeAllListeners();
      setIsListening(false);
      console.log('Stopping voice recognition... in Currency detection module');
    } catch (err) {
      console.error('Error stopping voice recognition: ', err);
    }
  };

  const onSpeechResults = e => {
    const spokenText = e.value[0].toLowerCase();
    console.log('Spoken Text:', spokenText);
    console.log('currency module');

    if (spokenText === 'back') {
      stopListening();
      navigation.goBack();
    }
  };

  useEffect(() => {
    Voice.onSpeechResults = onSpeechResults;

    return () => {
      Voice.destroy().then(Voice.removeAllListeners);
    };
  }, []);

  return (
    <View style={styles.container}>
      <Text style={styles.text}>Currency Detection Screen</Text>
    </View>
  );
}

const styles = StyleSheet.create({
  container: {
    flex: 1,
    justifyContent: 'center',
    alignItems: 'center',
  },
  text: {
    fontSize: 24,
    fontWeight: 'bold',
    textAlign: 'center',
  },
});

export default CurrencyDetection;

Can anybody please help me identify the error?
