diff --git a/Emotion based music player/Dataset/README.md b/Emotion based music player/Dataset/README.md
new file mode 100644
index 000000000..1d59884f4
--- /dev/null
+++ b/Emotion based music player/Dataset/README.md
@@ -0,0 +1,17 @@
+# Emotion Based Music Player
+
+### Goal 🎯
+The objective of the emotion-based music player is to build an intelligent system that detects and analyzes a user's emotions in real time through techniques such as facial recognition, voice analysis, or biosensors. Based on the detected emotional state, the player automatically curates and adjusts playlists to match the user's mood and provide a personalized listening experience. The system aims to remove the burden of manual song selection, adapt dynamically to emotional changes, and offer privacy-conscious, culturally relevant suggestions, while still letting users override or customize the music to their preferences.
+
+### Model(s) used for the Web App 🧮
+
+The models and technologies used in the emotion-based music player project include:
+
+1. Pretrained Keras Model (model.h5): A deep learning model, likely a Convolutional Neural Network (CNN), is loaded to predict emotions based on processed facial landmarks and hand movements.
+
+2. Mediapipe Library: Used to extract facial and hand landmarks, which serve as the input features for emotion recognition; it captures key points from the user's face and hands in each frame.
+
+3. Streamlit and WebRTC: Used for the web interface and real-time video streaming, capturing the user's face through the web camera for emotion recognition.
+
+4. Overall, the project combines deep learning (Keras) and computer vision (Mediapipe) to extract facial and hand landmark data, predicts the emotion with the trained model, and uses that prediction to drive the music recommendation (a rough sketch of this pipeline follows below).
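+
+For reference, the prediction step boils down to loading the bundled model and label array and taking the arg-max over the class scores. A minimal sketch, assuming the files sit in the working directory and the feature vector is built the way `Web apps/music.py` builds it:
+
+```python
+import numpy as np
+from keras.models import load_model
+
+model = load_model("model.h5")   # pretrained emotion classifier
+labels = np.load("labels.npy")   # emotion name for each output class
+
+def predict_emotion(features):
+    # features: 2-D array of shape (1, n_features) built from Mediapipe landmarks
+    scores = model.predict(features)
+    return labels[int(np.argmax(scores))]
+```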
+
diff --git a/Emotion based music player/Dataset/emotion.npy b/Emotion based music player/Dataset/emotion.npy
new file mode 100644
index 000000000..bf1612af7
Binary files /dev/null and b/Emotion based music player/Dataset/emotion.npy differ
diff --git a/Emotion based music player/Dataset/labels.npy b/Emotion based music player/Dataset/labels.npy
new file mode 100644
index 000000000..4dd5b2f19
Binary files /dev/null and b/Emotion based music player/Dataset/labels.npy differ
diff --git a/Emotion based music player/Images/Capture.png b/Emotion based music player/Images/Capture.png
new file mode 100644
index 000000000..5610b8f2f
Binary files /dev/null and b/Emotion based music player/Images/Capture.png differ
diff --git a/Emotion based music player/Images/Information.png b/Emotion based music player/Images/Information.png
new file mode 100644
index 000000000..cf5b17ae9
Binary files /dev/null and b/Emotion based music player/Images/Information.png differ
diff --git a/Emotion based music player/Images/Output.png b/Emotion based music player/Images/Output.png
new file mode 100644
index 000000000..1f39e0b83
Binary files /dev/null and b/Emotion based music player/Images/Output.png differ
diff --git a/Emotion based music player/Images/emotion.jpg b/Emotion based music player/Images/emotion.jpg
new file mode 100644
index 000000000..162888ac9
Binary files /dev/null and b/Emotion based music player/Images/emotion.jpg differ
diff --git a/Emotion based music player/Images/open page.png b/Emotion based music player/Images/open page.png
new file mode 100644
index 000000000..7630c9fd8
Binary files /dev/null and b/Emotion based music player/Images/open page.png differ
diff --git a/Emotion based music player/Model/README.md b/Emotion based music player/Model/README.md
new file mode 100644
index 000000000..cd0f9c372
--- /dev/null
+++ b/Emotion based music player/Model/README.md
@@ -0,0 +1,49 @@
+Project Title: Emotion-Based Music Player
+🎯 Goal
+---The main goal of this project is to create a web application that recommends music based on the user's emotions. This is achieved by using a model that classifies different emotions.
+
+🧵 Dataset
+---The dataset used in this project consists of emotion.npy and labels.npy, NumPy array (.npy) files that typically hold numerical data such as the extracted feature sets and their corresponding emotion labels for the emotion-based music player (a quick way to inspect them is sketched below).
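+
+Both files are plain NumPy arrays, so their shapes and contents can be checked before training or debugging. A small sketch, assuming it is run from this Dataset folder (`allow_pickle` is only a precaution in case the arrays hold Python objects):
+
+```python
+import numpy as np
+
+emotion = np.load("emotion.npy", allow_pickle=True)
+labels = np.load("labels.npy", allow_pickle=True)
+
+print("emotion.npy:", emotion.shape, emotion.dtype)
+print("labels.npy:", labels.shape, np.unique(labels))
+```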
+
+🧾 Description
+---This project utilizes MediaPipe, Keras, OpenCV, and Streamlit to build a web application. The application captures webcam input to detect emotions and recommend music accordingly. The project is explained in detail in a video linked in the README.
+
+🧮 What I Had Done!
+---Developed a model to classify emotions, adapting code from a live-emoji detection project.
+---Created a web application using Streamlit and Streamlit-webrtc for webcam capture.
+---Integrated the emotion classification model into the web application for music recommendation.
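+
+The recommendation step itself is lightweight: the detected emotion is combined with the user's language and singer preferences into a search query. A rough sketch of that step (the function name and `platform` argument are illustrative; the actual logic lives in `Web apps/music.py`):
+
+```python
+import webbrowser
+from urllib.parse import quote_plus
+
+def recommend(language, emotion, singer, platform="youtube"):
+    # Combine the detected emotion with the user's preferences into a search query
+    query = f"{language} {emotion} song {singer}"
+    if platform == "youtube":
+        webbrowser.open("https://www.youtube.com/results?search_query=" + quote_plus(query))
+    else:
+        webbrowser.open("https://open.spotify.com/search/" + quote_plus(query))
+```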
+
+🚀 Models Implemented
+1. Pretrained Deep Learning Model (model.h5):
+
+-->Likely a Convolutional Neural Network (CNN) used for emotion recognition, especially in processing facial and hand landmarks for detecting user emotions.
+-->The model is loaded using Keras' load_model function, indicating it's a neural network trained on emotion-labeled data.
+2. Mediapipe's Holistic and Hands Models:
+
+-->Mediapipe Holistic: Used for detecting key facial and body landmarks.
+-->Mediapipe Hands: Used for detecting hand landmarks to infer gestures that may also be used for emotion recognition.
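+
+A minimal sketch of how the Holistic model above is typically initialised and queried per frame (variable and function names are illustrative; the real wiring is in `Web apps/music.py`):
+
+```python
+import cv2
+import mediapipe as mp
+
+holistic = mp.solutions.holistic.Holistic()
+
+def extract_landmarks(bgr_frame):
+    # Mediapipe expects RGB input; Holistic returns face and hand landmarks together
+    results = holistic.process(cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2RGB))
+    return results.face_landmarks, results.left_hand_landmarks, results.right_hand_landmarks
+```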
+
+📚 Libraries Needed
+MediaPipe
+Keras
+OpenCV
+Streamlit
+Streamlit-webrtc
+
+📊 Exploratory Data Analysis Results
+Exploratory Data Analysis (EDA) involved examining the distribution of the dataset, visualizing sample data, and understanding the different emotion classes. The dataset was organized so that the model's performance could be evaluated effectively.
+
+📈 Performance of the Models based on the Accuracy Scores
+---Final Accuracy = 58.33%, Validation Accuracy = 54.99%
+
+
+📢 Conclusion
+---The emotion-based music player successfully integrates deep learning and computer vision techniques to create a personalized, emotion-driven music experience. By leveraging facial expression and hand gesture recognition through Mediapipe, combined with a pretrained deep learning model, the system can detect the user's emotional state in real-time. This allows for dynamic music recommendations that adapt to the user's mood, enhancing the listening experience.
+
+The project demonstrates how artificial intelligence can transform user interaction with media, making it more intuitive, personalized, and engaging. With future improvements, such as more advanced emotion recognition and enhanced music recommendations, this system could revolutionize how users interact with digital content, making it more emotionally responsive and contextually aware.
+
+✒️ Your Signature
+Nadipudi Shanmukhi satya
+GitHub: https://github.com/shanmukhi-developer
+LinkedIn: https://www.linkedin.com/in/nadipudi-shanmukhi-satya-6904a0242/
diff --git a/Emotion based music player/Model/model.h5 b/Emotion based music player/Model/model.h5
new file mode 100644
index 000000000..1761732eb
Binary files /dev/null and b/Emotion based music player/Model/model.h5 differ
diff --git a/Emotion based music player/Requirements.txt b/Emotion based music player/Requirements.txt
new file mode 100644
index 000000000..586f7c10d
--- /dev/null
+++ b/Emotion based music player/Requirements.txt
@@ -0,0 +1,13 @@
+*Requirements for Running the Project*
+
+Python 3.x
+Python libraries:
+1. streamlit
+2. streamlit-webrtc
+3. opencv-python
+4. mediapipe
+5. keras
+6. numpy
+
+-->A pre-trained Keras model (model.h5) and a NumPy labels file (labels.npy), both included in the project.
+-->A webcam to capture live video input.
\ No newline at end of file
diff --git a/Emotion based music player/Web apps/README.md b/Emotion based music player/Web apps/README.md
new file mode 100644
index 000000000..5cffcd4f6
--- /dev/null
+++ b/Emotion based music player/Web apps/README.md
@@ -0,0 +1,17 @@
+Emotion Based Music Player
+Goal 🎯
+---The main goal of this project is to create a web application that recommends music based on the user's emotions. This is achieved by using a model that classifies different emotions.
+
+Model(s) used for the Web App 🧮
+1. Pretrained Deep Learning Model (model.h5):
+
+-->Likely a Convolutional Neural Network (CNN) used for emotion recognition, especially in processing facial and hand landmarks for detecting user emotions.
+-->The model is loaded using Keras' load_model function, indicating it's a neural network trained on emotion-labeled data.
+2. Mediapipe's Holistic and Hands Models:
+
+-->Mediapipe Holistic: Used for detecting key facial and body landmarks.
+-->Mediapipe Hands: Used for detecting hand landmarks to infer gestures that may also be used for emotion recognition (see the sketch below).
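+
+In outline, the processor plugs into streamlit-webrtc as sketched below (a stripped-down sketch; the full implementation, including the Mediapipe feature extraction and the Keras prediction, is in `music.py`):
+
+```python
+import av
+from streamlit_webrtc import webrtc_streamer
+
+class EmotionProcessor:
+    def recv(self, frame):
+        frm = frame.to_ndarray(format="bgr24")
+        # ... run Mediapipe + the Keras model on `frm` and overlay the prediction ...
+        return av.VideoFrame.from_ndarray(frm, format="bgr24")
+
+webrtc_streamer(key="emotion", video_processor_factory=EmotionProcessor)
+```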
+
+
+Signature ✒️
+Nadipudi Shanmukhi satya
diff --git a/Emotion based music player/Web apps/music.py b/Emotion based music player/Web apps/music.py
new file mode 100644
index 000000000..0928fec9b
--- /dev/null
+++ b/Emotion based music player/Web apps/music.py
@@ -0,0 +1,102 @@
+import streamlit as st
+from streamlit_webrtc import webrtc_streamer
+import av
+import cv2
+import numpy as np
+import mediapipe as mp
+from keras.models import load_model
+import webbrowser
+
+# Pretrained Keras emotion classifier and the label array mapping outputs to emotion names
+model = load_model("model.h5")
+label = np.load("labels.npy")
+
+# Mediapipe Holistic extracts face and hand landmarks; Hands provides the
+# HAND_CONNECTIONS topology used when drawing the detected hand landmarks
+holistic = mp.solutions.holistic
+hands = mp.solutions.hands
+holis = holistic.Holistic()
+drawing = mp.solutions.drawing_utils
+
+st.header("Emotion Based Music Recommender")
+
+if "run" not in st.session_state:
+ st.session_state["run"] = "true"
+
+try:
+ emotion = np.load("emotion.npy")[0]
+except:
+ emotion=""
+
+if not(emotion):
+ st.session_state["run"] = "true"
+else:
+ st.session_state["run"] = "false"
+
+class EmotionProcessor:
+    def recv(self, frame):
+        frm = frame.to_ndarray(format="bgr24")
+
+        ##############################
+        # Mirror the frame so the preview behaves like a mirror
+        frm = cv2.flip(frm, 1)
+
+        # Mediapipe expects RGB input
+        res = holis.process(cv2.cvtColor(frm, cv2.COLOR_BGR2RGB))
+
+        lst = []
+
+        if res.face_landmarks:
+            # Face landmarks, expressed relative to landmark 1 so the features
+            # do not depend on where the face sits in the frame
+            for i in res.face_landmarks.landmark:
+                lst.append(i.x - res.face_landmarks.landmark[1].x)
+                lst.append(i.y - res.face_landmarks.landmark[1].y)
+
+            # Left-hand landmarks relative to landmark 8, zero-filled when absent
+            if res.left_hand_landmarks:
+                for i in res.left_hand_landmarks.landmark:
+                    lst.append(i.x - res.left_hand_landmarks.landmark[8].x)
+                    lst.append(i.y - res.left_hand_landmarks.landmark[8].y)
+            else:
+                for i in range(42):
+                    lst.append(0.0)
+
+            # Right-hand landmarks relative to landmark 8, zero-filled when absent
+            if res.right_hand_landmarks:
+                for i in res.right_hand_landmarks.landmark:
+                    lst.append(i.x - res.right_hand_landmarks.landmark[8].x)
+                    lst.append(i.y - res.right_hand_landmarks.landmark[8].y)
+            else:
+                for i in range(42):
+                    lst.append(0.0)
+
+            lst = np.array(lst).reshape(1, -1)
+
+            # Predict the emotion, overlay it on the frame and persist it for the
+            # recommendation step
+            pred = label[np.argmax(model.predict(lst))]
+
+            print(pred)
+            cv2.putText(frm, pred, (50, 50), cv2.FONT_ITALIC, 1, (255, 0, 0), 2)
+
+            np.save("emotion.npy", np.array([pred]))
+
+        drawing.draw_landmarks(frm, res.face_landmarks, holistic.FACEMESH_TESSELATION,
+                               landmark_drawing_spec=drawing.DrawingSpec(color=(0, 0, 255), thickness=-1, circle_radius=1),
+                               connection_drawing_spec=drawing.DrawingSpec(thickness=1))
+        drawing.draw_landmarks(frm, res.left_hand_landmarks, hands.HAND_CONNECTIONS)
+        drawing.draw_landmarks(frm, res.right_hand_landmarks, hands.HAND_CONNECTIONS)
+
+        ##############################
+
+        return av.VideoFrame.from_ndarray(frm, format="bgr24")
+
+lang = st.text_input("Language")
+singer = st.text_input("singer")
+choose = st.text_input("Select")
+
+# Start the webcam stream only while an emotion still needs to be captured
+if lang and singer and st.session_state["run"] != "false":
+    webrtc_streamer(key="key", desired_playing_state=True,
+                    video_processor_factory=EmotionProcessor)
+
+btn = st.button("Recommend me ")
+
+if btn:
+    if not emotion:
+        st.warning("Please let me capture your emotion first")
+        st.session_state["run"] = "true"
+    else:
+        # Open a search for songs matching the language, detected emotion and singer
+        if choose == "youtube":
+            webbrowser.open(f"https://www.youtube.com/results?search_query={lang}+{emotion}+song+{singer}")
+        else:
+            webbrowser.open(f"https://open.spotify.com/search/{lang}%20{emotion}%20songs%20{singer}")
+        # Clear the stored emotion so the next recommendation triggers a fresh capture
+        np.save("emotion.npy", np.array([""]))
+        st.session_state["run"] = "false"
\ No newline at end of file
diff --git a/Emotion based music player/Web apps/tempCodeRunnerFile.py b/Emotion based music player/Web apps/tempCodeRunnerFile.py
new file mode 100644
index 000000000..bfdde91dd
--- /dev/null
+++ b/Emotion based music player/Web apps/tempCodeRunnerFile.py
@@ -0,0 +1,8 @@
+import streamlit as st
+from streamlit_webrtc import webrtc_streamer
+import av
+import cv2
+import numpy as np
+import mediapipe as mp
+from keras.models import load_model
+import webbrowser
\ No newline at end of file