EMOTION BASED MUSIC RECOMMENDATION SYSTEM


 


TABLE OF CONTENTS

  • INTRODUCTION

  • LITERATURE SURVEY

  • SYSTEM ANALYSIS

  • FEASIBILITY STUDY

  • SYSTEM REQUIREMENTS

  • SYSTEM DESIGN

  • IMPLEMENTATION

  • SYSTEM TESTING

  • INPUT DESIGN AND OUTPUT DESIGN

  • SCREENSHOTS

  • FUTURE WORK

  • CONCLUSION

  • REFERENCE


                               INTRODUCTION:

Face detection involves classifying an image into two classes: one containing faces (targets) and the other containing the background (clutter), which needs to be removed. Although commonalities exist between faces, they vary considerably in terms of age, skin color and facial expression, which makes detection difficult. The problem is further complicated by differing lighting conditions, image qualities and geometries; partial occlusion and disguise are also possibilities. A face detector should be able to detect the presence of any face under any set of lighting conditions and in any background. According to Ekman, there are six universal expressions: fear, disgust, surprise, anger, sadness and happiness. These expressions can be recognized by observing variations in the face. For example, we can say a person is happy when we identify the gesture of a smile, indicated by tightened eyelids and raised lip corners. Changes in facial expression indicate a person's internal state, social communication and intentions. Automatic facial expression detection therefore has a large effect on applications in many areas, such as human emotion analysis, natural human-computer interaction, image retrieval and talking bots. Face recognition with Histogram of Oriented Gradients using CNNs has been an important topic in the technological community, as human beings find facial expressions one of the most natural and powerful means to express their intentions and emotions. Facial expression detection is the last stage of the system. The training procedure in expression recognition systems basically has three steps: feature learning, feature selection and classifier construction. The feature learning stage is first, feature selection is second, and classifier construction is last. After the feature learning stage, only the learned facial expression variations among all features are extracted. The facial expression is then represented by the best features, which are chosen by feature selection. These features should not only maximize the inter-class variation of expressions but also minimize the intra-class variation. Minimizing the intra-class variation of expressions is a problem because the same expressions of different individuals are far from each other in pixel space. Techniques that can be used for face detection are YOLO, SSD, R-CNN and Faster R-CNN.

Domain:


MACHINE LEARNING:

Machine learning (ML) is the study of computer algorithms that improve automatically through experience. It is seen as a subset of artificial intelligence. Machine learning algorithms build a mathematical model based on sample data, known as "training data", in order to make predictions or decisions without being explicitly programmed to do so. Machine learning algorithms are used in a wide variety of applications, such as email filtering and computer vision, where it is difficult or infeasible to develop conventional algorithms to perform the needed tasks. Machine learning is closely related to computational statistics, which focuses on making predictions using computers. The study of mathematical optimization delivers methods, theory and application domains to the field of machine learning. Data mining is a related field of study, focusing on exploratory data analysis through unsupervised learning. In its application across business problems, machine learning is also referred to as predictive analytics.


Machine learning approaches are traditionally divided into three broad categories, depending on the nature of the "signal" or "feedback" available to the learning system: 


                            
Fig 1: Machine Learning outlook

·       Supervised learning: The computer is presented with example inputs and their desired outputs, given by a "teacher", and the goal is to learn a general rule that maps inputs to outputs.

 

·       Unsupervised learning: No labels are given to the learning algorithm, leaving it on its own to find structure in its input. Unsupervised learning can be a goal in itself (discovering hidden patterns in data) or a means towards an end (feature learning).

 

·       Reinforcement learning: A computer program interacts with a dynamic environment in which it must perform a certain goal (such as driving a vehicle or playing a game against an opponent). As it navigates its problem space, the program is provided feedback that's analogous to rewards, which it tries to maximize.

Other approaches have been developed which don't fit neatly into this three-fold categorization, and sometimes more than one is used by the same machine learning system.
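As a brief illustration of the supervised case, the sketch below trains a classifier on labelled examples and checks it on held-out inputs. This is a minimal sketch assuming the scikit-learn library is installed (it is not part of this project's tool list); the dataset and classifier are illustrative choices, not the project's own.

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# "Teacher"-provided examples: inputs X with their desired outputs y
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Learn a general rule mapping inputs to outputs, then score it on unseen inputs
clf = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)
print("accuracy on unseen inputs:", clf.score(X_test, y_test))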

DEEP LEARNING:

Deep learning methods aim at learning feature hierarchies, with features from higher levels of the hierarchy formed by the composition of lower-level features. Automatically learning features at multiple levels of abstraction allows a system to learn complex functions mapping the input to the output directly from data, without depending completely on human-crafted features. Deep learning algorithms seek to exploit the unknown structure in the input distribution in order to discover good representations, often at multiple levels, with higher-level learned features defined in terms of lower-level features.


                                                Fig 3: Deep Learning

    OpenCV:

OpenCV (Open Source Computer Vision Library) is an open source computer vision and machine learning software library. OpenCV was built to provide a common infrastructure for computer vision applications and to accelerate the use of machine perception in commercial products. Being a BSD-licensed product, OpenCV makes it easy for businesses to utilize and modify the code.

Keras:

Keras was developed and is maintained by François Chollet, a Google engineer, using four guiding principles:

·       Modularity: A model can be understood as a sequence or a graph alone. All the concerns of a deep learning model are discrete components that can be combined in arbitrary ways.

·       Minimalism: The library provides just enough to achieve an outcome, no frills and maximizing readability.

·       Extensibility: New components are intentionally easy to add and use within the framework, intended for researchers to trial and explore new ideas.

·        Python: No separate model files with custom file formats. Everything is native Python. Keras is designed for minimalism and modularity allowing you to very quickly define deep learning models and run them on top of a Theano or TensorFlow backend.


      LITERATURE SURVEY


2.1 Face Detection and Facial Expression Recognition System

 

Author: Anagha S. Dhavalikar et al

 

This paper proposed an automatic facial expression recognition system with three phases: 1. face detection, 2. feature extraction and 3. expression recognition. In the first phase, face detection is performed using an RGB color model, lighting compensation for locating the face, and morphological operations for retaining the required parts of the face, i.e. the eyes and mouth. The system also uses the Active Appearance Model (AAM) method for facial feature extraction: points on the face such as the eyes, eyebrows and mouth are located, and a data file is created which gives information about the model points. When a face is detected and an expression is given as input, the AAM model changes according to the expression.

 

2.2 Emotional Recognition from Facial Expression Analysis using Bezier Curve Fitting

 

Authors: Yong-Hwan Lee, Woori Han and Youngseop Kim

 

This paper proposed a system based on Bezier curve fitting. The system uses two steps for facial expression and emotion recognition: the first is detection and analysis of the facial area from the input image, and the second is verification of the facial emotion from characteristic features in the region of interest. For face detection, the first phase uses a color still image based on skin-color pixels with initialized spatial filtering, based on the result of lighting compensation; a feature map is then used to estimate the face position and the facial locations of the eyes and mouth. After extracting the region of interest, the system extracts points from the feature map to apply Bezier curves to the eyes and mouth. To understand the emotion, the system is trained by measuring the Hausdorff distance between the Bezier curves of the input face image and those stored in the database.

 

 

2.3 Using Animated Mood Pictures in Music Recommendation

 

Authors: Arto Lehtiniemi and Jukka Holm

 

Arto Lehtiniemi and Jukka Holm proposed a system based on animated mood pictures in music recommendation. In this system, the user interacts with a collection of images to receive music recommendations with respect to the genre of the picture. This music recommendation system was developed by the Nokia Research Center. It uses textual meta tags to describe the genre, together with audio signal processing.

 

 

2.4 Human-computer interaction using emotion recognition from facial expression.

 

Authors: F. Abdat, C. Maaoui and A. Pruski

 

F. Abdat, C. Maaoui and A. Pruski proposed a fully automatic facial expression recognition system based on three steps: face detection, facial characteristic extraction and facial expression classification. The system proposed an anthropometric model to detect facial feature points, combined with the Shi-Tomasi method. In this method, the variations of 21 distances that describe the facial features relative to a neutral face are used, and classification is based on an SVM (Support Vector Machine).

 

 

 

 

2.5 Emotion-based Music Recommendation by Association Discovery from Film Music.

 

Authors: Fang-Fei Kuo and Suh-Yin Lee

Fang-Fei Kuo and Suh-Yin Lee observed that, with the growth of digital music, the development of music recommendation is helpful for users. Existing recommendation approaches are based on the user's preference for music; however, sometimes music needs to be recommended according to emotion. They proposed a novel model for emotion-based music recommendation based on association discovery from film music. They investigated music feature extraction and modified the affinity graph for association discovery between emotions and music features. Experimental results show that the proposed approach achieves 85% accuracy on average.

 

 

2.6 Moodplay: Interactive Mood-based Music Discovery and Recommendation

 

Authors: Ivana Andjelkovic and John O'Donovan

 

Ivana Andjelkovic and John O'Donovan noted that a large body of research in recommender systems focuses on optimizing prediction and ranking. However, recent work has highlighted the importance of other aspects of recommendation, including transparency, control and user experience in general. Building on these aspects, they introduced MoodPlay, a hybrid music recommender system which integrates content- and mood-based filtering in an interactive interface. They show how MoodPlay allows the user to explore a music collection along latent affective dimensions, and explain how to integrate user input at recommendation time with predictions based on a pre-existing user profile. Results of a user study (N=240) are discussed, with four conditions evaluated with varying degrees of visualization, interaction and control.

 

2.7 An Accurate Algorithm for Generating a Music Playlist based on Facial Expressions

 

Author: Anukriti Dureha

Anukriti Dureha observed that manual segregation of a playlist and annotation of songs, in accordance with the current emotional state of a user, is labor-intensive and time-consuming. Numerous algorithms have been proposed to automate this process, but the existing algorithms are slow, increase the overall cost of the system by using additional hardware (e.g. EEG systems and sensors) and have lower accuracy. This work presents an algorithm that automates the process of generating an audio playlist based on the facial expressions of a user, to save the time and labor invested in performing the process manually. The proposed algorithm aspires to reduce the overall computational time and the cost of the designed system, while also increasing its accuracy. The facial expression recognition module of the proposed algorithm is validated by testing the system against user-dependent and user-independent datasets.

 

2.8 Enhancing Music Recommender Systems with Personality Information and Emotional States

 

Authors: Bruce Ferwerda and Markus Schedl

 

Bruce Ferwerda and Markus Schedl presented initial research assumptions for improving music recommendations by including personality and emotional states. By including these psychological factors, they believe that the accuracy of the recommendation can be enhanced. The system pays attention to how people use music to regulate their emotional states, and how this regulation is related to their personality.


SOFTWARE ENVIRONMENT:


Python is a high-level, interpreted scripting language developed in the late 1980s by Guido van Rossum at the National Research Institute for Mathematics and Computer Science in the Netherlands. The initial version was published on the alt.sources newsgroup in 1991, and version 1.0 was released in 1994. Python drew inspiration from other programming languages such as C, C++, Java, Perl and Lisp.

Python 2.0 was released in 2000, and the 2.x versions were the prevalent releases until December 2008. At that time, the development team decided to release version 3.0, which contained a few relatively small but significant changes that were not backward compatible with the 2.x versions. Python 2 and 3 are very similar, and some features of Python 3 have been backported to Python 2, but in general they remain not quite compatible.

Both Python 2 and 3 have continued to be maintained and developed, with periodic release updates for both. As of this writing, the most recent versions available are 2.7.15 and 3.6.5. However, an official End of Life date of January 1, 2020 has been established for Python 2, after which time it will no longer be maintained. If you are a newcomer to Python, it is recommended that you focus on Python 3.

Python is still maintained by a core development team at the Institute, and Guido is still in charge, having been given the title of BDFL (Benevolent Dictator for Life) by the Python community. The name Python, by the way, derives not from the snake, but from the British comedy troupe Monty Python’s Flying Circus, of which Guido was, and presumably still is, a fan. It is common to find references to Monty Python sketches and movies scattered throughout the Python documentation.

WHY CHOOSE PYTHON

If you’re going to write programs, there are literally dozens of commonly used languages to choose from. Why choose Python? Here are some of the features that make Python an appealing choice.

 

Python is Popular

Python has been growing in popularity over the last few years. The 2018 Stack Overflow Developer Survey ranked Python as the 7th most popular and the number one most wanted technology of the year. World-class software development companies around the globe use Python every single day.

According to research by Dice, Python is also one of the hottest skills to have, and it is the most popular programming language in the world based on the Popularity of Programming Language Index.

Due to the popularity and widespread use of Python as a programming language, Python developers are sought after and paid well.

 

Python is interpreted

Many languages are compiled, meaning the source code you create needs to be translated into machine code, the language of your computer’s processor, before it can be run. Programs written in an interpreted language are passed straight to an interpreter that runs them directly.                                                                 

This makes for a quicker development cycle because you just type in your code and run it, without the intermediate compilation step.

One potential downside to interpreted languages is execution speed. Programs that are compiled into the native language of the computer processor tend to run more quickly than interpreted programs. For some applications that are particularly computationally intensive, like graphics processing or intense number crunching, this can be limiting.

In practice, however, for most programs, the difference in execution speed is measured in milliseconds, or seconds at most, and not appreciably noticeable to a human user. The expediency of coding in an interpreted language is typically worth it for most applications.

 

Python is Free

The Python interpreter is developed under an OSI-approved open-source license, making it free to install, use, and distribute, even for commercial purposes.

A version of the interpreter is available for virtually any platform there is, including all flavors of Unix, Windows, macOS, smartphones and tablets, and probably anything else you have ever heard of. A version even exists for the half dozen people remaining who use OS/2.

 

Python is Portable

Because Python code is interpreted and not compiled into native machine instructions, code written for one platform will work on any other platform that has the Python interpreter installed. (This is true of any interpreted language, not just Python.)

 

Python is Simple

As programming languages go, Python is relatively uncluttered, and the developers have deliberately kept it that way.

A rough estimate of the complexity of a language can be gleaned from the number of keywords or reserved words in the language. These are words that are reserved for special meaning by the compiler or interpreter because they designate specific built-in functionality of the language.

Python 3 has 33 keywords, and Python 2 has 31. By contrast, C++ has 62, Java has 53, and Visual Basic has more than 120, though these latter examples probably vary somewhat by implementation or dialect.

Python code has a simple and clean structure that is easy to learn and easy to read. In fact, as you will see, the language definition enforces code structure that is easy to read.                                                                                                                

But It's Not That Simple

For all its syntactical simplicity, Python supports most constructs that would be expected in a very high-level language, including complex dynamic data types, structured and functional programming, and object-oriented programming. Python accomplishes what many programming languages don't: the language itself is simply designed, but it is very versatile in terms of what you can accomplish with it.

Python has a very easy-to-read syntax. Some of Python's syntax comes from C, because that is the language that Python was written in. But Python uses whitespace to delimit code: spaces or tabs are used to organize code into groups. This is different from C. In C, there is a semicolon at the end of each line and curly braces ({}) are used to group code. Using whitespace to delimit code makes Python a very easy-to-read language.
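The following fragment is a small illustration of this point (not project code): the indented line belongs to the if-block purely because of its indentation, where C would need braces.

# Indentation, not braces or semicolons, groups these statements
def describe(n):
    if n % 2 == 0:
        return "even"   # inside the if-block because it is indented
    return "odd"        # back at function level

print(describe(4))      # prints: even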

 

Python Use

Python is used by hundreds of thousands of programmers and is used in many places. Sometimes only Python code is used for a program, but most of the time it is used to do simple jobs while another programming language is used to do more complicated tasks.

Its standard library is made up of many functions that come with Python when it is installed. On the Internet there are many other libraries available that make it possible for the Python language to do more things. These libraries make it a powerful language; it can do many different things.

Some things that Python is often used for are:

·       Web development

·       Scientific programming

·       Desktop GUIs

·       Network programming

·       Game programming

How to Install Python (Environment Set-up)

In this section of the tutorial, we will discuss the installation of python on various operating systems.

Installation on Windows

Visit the link https://www.python.org/downloads/ to download the latest release of Python. In this process, we will install Python 3.6.2 on our Windows operating system.
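Once the installer has finished, one quick way to confirm which interpreter is active (a hedged check, not a required project step) is:

# Print the version and location of the running interpreter
import sys
print(sys.version)       # e.g. "3.6.2 (v3.6.2:..., Jul 17 2017, ...)"
print(sys.executable)    # full path to the python executable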

 

SYSTEM ANALYSIS:

EXISTING SYSTEM

Emotion recognition plays a major role in interaction technology, where verbal components carry only one third of communication and non-verbal components carry two thirds. A facial emotion recognition (FER) method is used for detecting facial expressions. Facial expression plays a major role in expressing what a person feels: it expresses inner feelings and his or her mental state or perspective.

 

 PROPOSED SYSTEM

 

The human face plays an important role in knowing an individual's mood. A camera is used to get the required input from the human face. One application of this input is extracting information to deduce the mood of an individual. The "emotion" derived from this input is used to generate a list of songs. The tedious task of manually segregating or grouping songs into different lists is thereby reduced, and an appropriate playlist is generated based on an individual's emotional features. The Facial Expression Based Music Player aims at scanning and interpreting the data and accordingly creating a playlist based on the parameters provided. Thus, our proposed system focuses on detecting human emotions for developing an emotion-based music player: which approaches available music players use to detect emotions, which approach our music player follows to detect human emotions, and why it is better to use our system for emotion detection. A brief idea of the system's working, playlist generation and emotion classification is also given. In this project, we used the PyCharm tool for analysis.

 

 FEASIBILITY STUDY

 

The feasibility of the project is analyzed in this phase, and a business proposal is put forth with a very general plan for the project and some cost estimates. During system analysis, the feasibility study of the proposed system is carried out. This is to ensure that the proposed system is not a burden to the company. For feasibility analysis, some understanding of the major requirements for the system is essential.

 

Three key considerations involved in the feasibility analysis are:

·       ECONOMICAL FEASIBILITY

·       TECHNICAL FEASIBILITY

·       SOCIAL FEASIBILITY

 

 ECONOMICAL FEASIBILITY

This study is carried out to check the economic impact that the system will have on the organization. The amount of funds that the company can pour into the research and development of the system is limited, so the expenditures must be justified. The developed system is well within the budget, and this was achieved because most of the technologies used are freely available; only the customized products had to be purchased.

 TECHNICAL FEASIBILITY

This study is carried out to check the technical feasibility, that is, the technical requirements of the system. Any system developed must not place a high demand on the available technical resources, as this would lead to high demands being placed on the client. The developed system must have modest requirements, as only minimal or no changes are required for implementing this system.

 

 SOCIAL FEASIBILITY

This aspect of the study checks the level of acceptance of the system by the user. This includes the process of training the user to use the system efficiently. The user must not feel threatened by the system, but must instead accept it as a necessity. The level of acceptance by the users solely depends on the methods employed to educate users about the system and to make them familiar with it. Their level of confidence must be raised so that they are also able to offer constructive criticism, which is welcomed, as they are the final users of the system.

 

 SYSTEM REQUIREMENTS

 HARDWARE REQUIREMENTS:

The hardware requirements may serve as the basis for a contract for the implementation of the system and should therefore be a complete and consistent specification of the whole system. They are used by software engineers as the starting point for the system design. They should state what the system should do, not how it should be implemented.

 

       System                             :         Pentium Dual Core

       Hard Disk                         :         120 GB

       Monitor                            :         15'' LED

       Input Devices                   :         Keyboard, Mouse

       RAM                                :         1 GB

 

 SOFTWARE REQUIREMENTS:

The software requirements document is the specification of the system. It should include both a definition and a specification of requirements. It is a statement of what the system should do rather than how it should do it. The software requirements provide a basis for creating the software requirements specification. They are useful in estimating cost, planning team activities, performing tasks and tracking the team's progress throughout the development activity.

 

       Operating System             :         Windows 10

       Coding Language             :         Python

       Tool                                  :         PyCharm

       Server                               :         Flask

SYSTEM DESIGN    



DATA FLOW DIAGRAM:
  1. The DFD is also called a bubble chart. It is a simple graphical formalism that can be used to represent a system in terms of the input data to the system, the various processing carried out on this data, and the output data generated by the system.
  2. The data flow diagram (DFD) is one of the most important modeling tools. It is used to model the system components: the system process, the data used by the process, any external entity that interacts with the system, and the information flows in the system.
  3. The DFD shows how information moves through the system and how it is modified by a series of transformations. It is a graphical technique that depicts information flow and the transformations that are applied as data moves from input to output.
  4. A DFD may be used to represent a system at any level of abstraction and may be partitioned into levels that represent increasing information flow and functional detail.

 







UML DIAGRAMS

UML stands for Unified Modeling Language. UML is a standardized general-purpose modeling language in the field of object-oriented software engineering. The standard is managed, and was created by, the Object Management Group.

The goal is for UML to become a common language for creating models of object-oriented computer software. In its current form, UML comprises two major components: a meta-model and a notation. In the future, some form of method or process may also be added to, or associated with, UML.

      The Unified Modeling Language is a standard language for specifying, visualizing, constructing and documenting the artifacts of software systems, as well as for business modeling and other non-software systems.

The UML represents a collection of best engineering practices that have proven successful in the modeling of large and complex systems.

 The UML is a very important part of developing object-oriented software and the software development process. The UML uses mostly graphical notations to express the design of software projects.

 

GOALS:

     The Primary goals in the design of the UML are as follows:

1.     Provide users a ready-to-use, expressive visual modeling language so that they can develop and exchange meaningful models.

2.     Provide extendibility and specialization mechanisms to extend the core concepts.

3.     Be independent of particular programming languages and development processes.

4.     Provide a formal basis for understanding the modeling language.

5.     Encourage the growth of OO tools market.

6.    Integrate best practices.


USE CASE DIAGRAM:

A use case diagram in the Unified Modeling Language (UML) is a type of behavioral diagram defined by and created from a Use-case analysis. Its purpose is to present a graphical overview of the functionality provided by a system in terms of actors, their goals (represented as use cases), and any dependencies between those use cases. The main purpose of a use case diagram is to show what system functions are performed for which actor. Roles of the actors in the system can be depicted.


FLOW DIAGRAM:

Flowcharts are used in designing and documenting simple processes or programs. Like other types of diagrams, they help visualize what is going on and thereby help understand a process, and perhaps also find less-obvious features within the process, like flaws and bottlenecks. There are different types of flowcharts: each type has its own set of boxes and notations. The two most common types of boxes in a flowchart are:

•           A processing step, usually called activity, and denoted as a rectangular box.

•           A decision, usually denoted as a diamond.

 A flowchart is described as "cross-functional" when the chart is divided into different vertical or horizontal parts, to describe the control of different organizational units. A symbol appearing in a particular part is within the control of that organizational unit. A cross-functional flowchart allows the author to correctly locate the responsibility for performing an action or making a decision, and to show the responsibility of each organizational unit for different parts of a single process.


CLASS DIAGRAM:

In software engineering, a class diagram in the Unified Modelling Language (UML) is a type of static structure diagram that describes the structure of a system by showing the system's classes, their attributes, operations (or methods), and the relationships among the classes. It explains which class contains information.



SEQUENCE DIAGRAM:

A sequence diagram in the Unified Modelling Language (UML) is a kind of interaction diagram that shows how processes operate with one another and in what order. It is a construct of a Message Sequence Chart. Sequence diagrams are sometimes called event diagrams, event scenarios, or timing diagrams.


ACTIVITY DIAGRAM:

Activity diagrams are graphical representations of workflows of stepwise activities and actions with support for choice, iteration and concurrency. In the Unified Modelling Language, activity diagrams can be used to describe the business and operational step-by-step workflows of components in a system. An activity diagram shows the overall flow of control.


IMPLEMENTATION:

MODULES:

In this application, the user uploads an image; using Python and OpenCV, the image is pre-processed to extract features, and these features are applied to an SVM/deep-learning neural network training model to predict the user's mood. Based on the detected mood, all matching songs are shown in a drop-down box, and the user can select any song and play it.

All sample images are in the images folder and all songs are in the songs folder. You can add new songs to that folder, named happy1.mp3, happy2.mp3, happy3.mp3 or sad1.mp3, sad2.mp3, etc.; songs can be added like this for all categories. Currently the same song is used for all moods.

 

A.    FACE DETECTION

B.    EMOTION CLASSIFICATION

C.    MUSIC RECOMMENDATION

 

MODULES DESCRIPTION:

A.   FACE DETECTION:

 

The main objective of the face detection technique is to identify the face in the frame while reducing external noise and other factors. The steps involved in the face detection process are: 1. image pyramid, 2. Histogram of Oriented Gradients, 3. linear classifier. The obtained data are decomposed into multiple scales by sampling the image with an image pyramid. This technique is used simply to extract features while reducing noise and other factors. The low-pass image pyramid technique (also known as a Gaussian pyramid) consists of smoothing the frame and subsampling it by decreasing its resolution; the process is repeated a few times, so that at the end of the process we obtain a frame similar to the original one but with a decreased resolution and an increased smoothing level.
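As a rough sketch of the pyramid and HOG steps described above (assuming only OpenCV; the image path is a placeholder, and the 64x64 HOG window is an illustrative choice rather than the project's own setting):

import cv2

# Build a small Gaussian pyramid: each level smooths the frame
# and halves its resolution, as described above
frame = cv2.imread("face.jpg", cv2.IMREAD_GRAYSCALE)   # placeholder path
pyramid = [frame]
for _ in range(3):
    pyramid.append(cv2.pyrDown(pyramid[-1]))           # blur + 2x downsample
for i, level in enumerate(pyramid):
    print("level", i, "shape:", level.shape)

# Histogram of Oriented Gradients on a fixed-size crop:
# (window, block, block stride, cell, number of orientation bins)
hog = cv2.HOGDescriptor((64, 64), (16, 16), (8, 8), (8, 8), 9)
features = hog.compute(cv2.resize(frame, (64, 64)))
print("HOG feature vector length:", len(features))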

B.   EMOTION CLASSIFICATION:

 

When the face is successfully detected, a bounding box is applied as an overlay on the image to extract the ROI (the face) for further analysis. The extracted ROI is then processed by the "Predictor" function, a called script that extracts the 68 facial landmark points and saves them in an array. Next, the data stored in the feature array are put through a PCA reduction step that reduces the size of the data and eliminates correlated coordinates, leaving only the necessary points as principal components. The data form a 68x2 array: 68 points, each with coordinates on the x-axis and y-axis. The array is converted into a vector of 136 rows and 1 column. The facial landmark extraction code "Predictor" is trained with a set of images and a landmark map for each image; it learns to extract the facial landmark map of a given face image from the indexed pixel intensity values of each point, using regression trees trained with a gradient boosting algorithm. After the PCA reduction, the obtained data are used for classification: a multiclass SVM with a linear kernel compares the input data with the stored data to determine to which class (emotion) it belongs. The detected emotion class is then passed on to the music recommendation module.

Figure 5: Flow diagram of the module - Emotion Classification
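A hedged sketch of this pipeline is given below. It assumes dlib's 68-point shape predictor and scikit-learn are available (neither is named elsewhere in this report, so treat them as illustrative stand-ins for the "Predictor" script); the model file name and training arrays are placeholders.

import numpy as np
import dlib                                  # assumed landmark library
from sklearn.decomposition import PCA
from sklearn.svm import SVC

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def landmark_vector(gray):
    # Return the 68 (x, y) landmark points as one 136-element vector
    face = detector(gray)[0]                 # assume one face in the frame
    shape = predictor(gray, face)
    pts = [(shape.part(i).x, shape.part(i).y) for i in range(68)]
    return np.asarray(pts, dtype=float).ravel()

# Placeholder training data: rows of 136-element landmark vectors + labels
X_train = np.load("landmarks.npy")
y_train = np.load("labels.npy")

pca = PCA(n_components=30).fit(X_train)      # drop correlated coordinates
clf = SVC(kernel="linear").fit(pca.transform(X_train), y_train)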

 

C.   MUSIC RECOMMENDATION:

 

The input is acquired in real time: the camera captures the video and framing is then performed. A hidden Markov model classifier is used for processing the framed images. The obtained frames are considered across all frames and pixel formats for the purpose of emotion classification. The value of each landmark in the face is calculated and stored for future use. The efficiency of the classifier is about 90-95%, so that even when there are changes in the face due to environmental conditions, the system can still identify the face and the emotion being expressed. The emotions are then identified from the obtained values: the received pixel values are compared with the threshold values present in the code. The values are transferred to the web service, and songs are played according to the emotion detected.
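A minimal sketch of this flow (capture a frame, classify it, pick a matching song) follows; classify_emotion is a placeholder standing in for the trained classifier described above, and the songs directory layout follows the sample code given later in this report.

import os
import random
import cv2

def recommend(emotion, song_dir="songs"):
    # Pick a song whose file name contains the detected emotion
    matches = [f for f in os.listdir(song_dir) if emotion in f]
    return random.choice(matches) if matches else None

cap = cv2.VideoCapture(0)              # default camera
ok, frame = cap.read()                 # grab a single frame
cap.release()
if ok:
    emotion = classify_emotion(frame)  # placeholder classifier
    print("Detected:", emotion, "->", recommend(emotion))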

 

 OpenCV

OpenCV (Open Source Computer Vision Library) is an open source computer vision and machine learning software library. OpenCV was built to provide a common infrastructure for computer vision applications and to accelerate the use of machine perception in commercial products. Being a BSD-licensed product, OpenCV makes it easy for businesses to utilize and modify the code. The library has more than 2500 optimized algorithms, which include a comprehensive set of both classic and state-of-the-art computer vision and machine learning algorithms. These algorithms can be used to detect and recognize faces, identify objects, classify human actions in videos, track camera movements, track moving objects, extract 3D models of objects, produce 3D point clouds from stereo cameras, stitch images together to produce a high-resolution image of an entire scene, find similar images from an image database, remove red eyes from images taken using flash, follow eye movements, recognize scenery and establish markers to overlay it with augmented reality, etc.

                         OpenCV has a user community of more than 47 thousand people and an estimated number of downloads exceeding 18 million. The library is used extensively in companies, research groups and by governmental bodies. Along with well-established companies like Google, Yahoo, Microsoft, Intel, IBM, Sony, Honda and Toyota that employ the library, there are many startups, such as Applied Minds, VideoSurf and Zeitera, that make extensive use of OpenCV. OpenCV's deployed uses span the range from stitching street-view images together, detecting intrusions in surveillance video in Israel, monitoring mine equipment in China, helping robots navigate and pick up objects at Willow Garage, detecting swimming pool drowning accidents in Europe, running interactive art in Spain and New York, and checking runways for debris in Turkey, to inspecting labels on products in factories around the world and rapid face detection in Japan. It has C++, Python, Java and MATLAB interfaces and supports Windows, Linux, Android and Mac OS.

                         OpenCV leans mostly towards real-time vision applications and takes advantage of MMX and SSE instructions when available. Full-featured CUDA and OpenCL interfaces are being actively developed right now. There are over 500 algorithms and about 10 times as many functions that compose or support those algorithms. OpenCV is written natively in C++ and has a template interface that works seamlessly with STL containers.

SYNTAX: pip install opencv-python
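A quick, hedged smoke test after installation (the cascade file is the one the sample code later in this report expects to find in the working directory):

import cv2

print(cv2.__version__)                         # confirm the bindings import
cascade = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")
print("cascade loaded:", not cascade.empty())  # False means the XML is missing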


Keras

Keras is an API designed for human beings, not machines. Keras follows best practices for reducing cognitive load: it offers consistent & simple APIs, it minimizes the number of user actions required for common use cases, and it provides clear & actionable error messages.

                         It also has extensive documentation and developer guides. Keras contains numerous implementations of commonly used neural network building blocks such as layers, objectives, activation functions and optimizers, and a host of tools to make working with image and text data easier, simplifying the coding necessary for writing deep neural network code. The code is hosted on GitHub, and community support forums include the GitHub issues page and a Slack channel. Keras is a minimalist Python library for deep learning that can run on top of Theano or TensorFlow.

                         It was developed to make implementing deep learning models as fast and easy as possible for research and development. It runs on Python 2.7 or 3.5 and can seamlessly execute on GPUs and CPUs given the underlying frameworks. It is released under the permissive MIT license. Keras was developed and is maintained by François Chollet, a Google engineer, using four guiding principles:

·       Modularity: A model can be understood as a sequence or a graph alone. All the concerns of a deep learning model are discrete components that can be combined in arbitrary ways.

·       Minimalism: The library provides just enough to achieve an outcome, no frills and maximizing readability.

·       Extensibility: New components are intentionally easy to add and use within the framework, intended for researchers to trial and explore new ideas.

·       Python: No separate model files with custom file formats. Everything is native Python. Keras is designed for minimalism and modularity allowing you to very quickly define deep learning models and run them on top of a Theano or TensorFlow backend.

SYNTAX: pip install keras

ALGORITHMS              

Convolutional Neural Network (ConvNet/CNN) :

Convolutional Neural Network (ConvNet/CNN) is a Deep Learning algorithm which can take in an input image, assign importance (learnable weights and biases) to various aspects/objects in the image and be able to differentiate one from the other. The pre-processing required in a ConvNet is much lower as compared to other classification algorithms. While in primitive methods filters are hand-engineered, with enough training, ConvNets have the ability to learn these filters/characteristics.

The architecture of a ConvNet is analogous to that of the connectivity pattern of neurons in the human brain and was inspired by the organization of the visual cortex. Individual neurons respond to stimuli only in a restricted region of the visual field known as the receptive field. A collection of such fields overlaps to cover the entire visual area.

A ConvNet is able to successfully capture the Spatial and Temporal dependencies in an image through the application of relevant filters. The architecture performs a better fitting to the image dataset due to the reduction in the number of parameters involved and reusability of weights. In other words, the network can be trained to understand the sophistication of the image better.

The role of the ConvNet is to reduce the images into a form which is easier to process, without losing features which are critical for getting a good prediction. This is important when we are to design an architecture which is not only good at learning features but also is scalable to massive datasets.
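To make this concrete, below is a minimal ConvNet sketch in Keras sized for this project's 48x48 grayscale face crops and seven emotion classes. It is an illustrative model only, not the _mini_XCEPTION architecture that the sample code actually loads.

from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

model = Sequential([
    # Learn low-level filters (edges, corners) on the 48x48x1 input
    Conv2D(32, (3, 3), activation="relu", input_shape=(48, 48, 1)),
    MaxPooling2D((2, 2)),                    # shrink the image, keep salient responses
    Conv2D(64, (3, 3), activation="relu"),   # compose higher-level features
    MaxPooling2D((2, 2)),
    Flatten(),
    Dense(64, activation="relu"),
    Dense(7, activation="softmax"),          # one probability per emotion class
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()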

Support Vector Machine (SVM)

 "Support Vector Machine" (SVM) is a supervised machine learning algorithm which can be used for both classification and regression challenges. However, it is mostly used in classification problems. In the SVM algorithm, we plot each data item as a point in n-dimensional space (where n is the number of features you have), with the value of each feature being the value of a particular coordinate. Then, we perform classification by finding the hyper-plane that differentiates the two classes very well.

Support Vectors are simply the co-ordinates of individual observation. The SVM classifier is a frontier which best segregates the two classes (hyper-plane/ line).
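The toy example below illustrates the idea, assuming scikit-learn: two well-separated 2-D clusters, a linear SVM fit between them, and a query point classified by which side of the hyper-plane it falls on.

import numpy as np
from sklearn.svm import SVC

rng = np.random.RandomState(0)
X = np.vstack([rng.randn(20, 2) + [2, 2],    # class 1 cluster
               rng.randn(20, 2) - [2, 2]])   # class 0 cluster
y = np.array([1] * 20 + [0] * 20)

clf = SVC(kernel="linear").fit(X, y)         # find the separating hyper-plane
print("number of support vectors:", len(clf.support_vectors_))
print("prediction for (1.5, 1.0):", clf.predict([[1.5, 1.0]])[0])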

Pros and Cons associated with SVM

  • Pros:
    • It works really well with a clear margin of separation.
    • It is effective in high-dimensional spaces.
    • It is effective in cases where the number of dimensions is greater than the number of samples.
    • It uses a subset of training points in the decision function (called support vectors), so it is also memory efficient.
  • Cons:
    • It doesn't perform well when we have a large data set, because the required training time is higher.
    • It also doesn't perform very well when the data set has more noise, i.e. the target classes are overlapping.
    • SVM doesn't directly provide probability estimates; these are calculated using an expensive five-fold cross-validation. This is available in the related SVC method of Python's scikit-learn library.

 

SAMPLE CODE

from tkinter import *
from tkinter import messagebox, ttk
from tkinter.filedialog import askopenfilename
import tkinter
import os
import numpy as np
import cv2
from keras.preprocessing.image import img_to_array
from keras.models import load_model
from playsound import playsound

main = tkinter.Tk()
main.title("EMOTION BASED MUSIC RECOMMENDATION SYSTEM")
main.geometry("1200x1200")

global value
global filename
global faces
global frame

# Haar cascade for face detection and pre-trained mini-XCEPTION emotion model
detection_model_path = 'haarcascade_frontalface_default.xml'
emotion_model_path = '_mini_XCEPTION.106-0.65.hdf5'
face_detection = cv2.CascadeClassifier(detection_model_path)
emotion_classifier = load_model(emotion_model_path, compile=False)
EMOTIONS = ["angry", "disgust", "scared", "happy", "sad", "surprised", "neutral"]

def upload():
    # Ask the user for a face image and display the chosen path
    global filename
    filename = askopenfilename(initialdir="images")
    pathlabel.config(text=filename)

def preprocess():
    # Read the image in grayscale and run Haar cascade face detection
    global filename, frame, faces
    text.delete('1.0', END)
    frame = cv2.imread(filename, 0)
    faces = face_detection.detectMultiScale(frame, scaleFactor=1.1, minNeighbors=5,
                                            minSize=(30, 30), flags=cv2.CASCADE_SCALE_IMAGE)
    text.insert(END, "Total number of faces detected : " + str(len(faces)))

def detectEmotion():
    # Classify the detected face and collect songs matching the emotion
    global faces
    if len(faces) > 0:
        # Keep one face (the first after sorting the detections by size)
        faces = sorted(faces, reverse=True,
                       key=lambda x: (x[2] - x[0]) * (x[3] - x[1]))[0]
        (fX, fY, fW, fH) = faces
        roi = frame[fY:fY + fH, fX:fX + fW]   # crop the region of interest (the face)
        roi = cv2.resize(roi, (48, 48))       # the model expects 48x48 input
        roi = roi.astype("float") / 255.0     # normalise pixel values to [0, 1]
        roi = img_to_array(roi)
        roi = np.expand_dims(roi, axis=0)     # add the batch dimension
        preds = emotion_classifier.predict(roi)[0]
        label = EMOTIONS[preds.argmax()]      # class with the highest probability
        messagebox.showinfo("Emotion Prediction Screen", "Emotion Detected As : " + label)
        # Refill the drop-down with songs whose file names contain the emotion
        value.clear()
        for r, d, f in os.walk('songs'):
            for file in f:
                if file.find(label) != -1:
                    value.append(file)
    else:
        messagebox.showinfo("Emotion Prediction Screen", "No face detected in uploaded image")

def playSong():
    # Play the song currently selected in the drop-down
    name = songslist.get()
    playsound('songs/' + name)

font = ('times', 20, 'bold')
title = Label(main, text='EMOTION BASED MUSIC RECOMMENDATION SYSTEM')
title.config(bg='brown', fg='white')
title.config(font=font)
title.config(height=3, width=80)
title.place(x=5, y=5)

font1 = ('times', 14, 'bold')
uploadButton = Button(main, text="Upload Image With Face", command=upload)
uploadButton.place(x=50, y=100)
uploadButton.config(font=font1)

pathlabel = Label(main)
pathlabel.config(bg='brown', fg='white')
pathlabel.config(font=font1)
pathlabel.place(x=300, y=100)

preprocessbutton = Button(main, text="Preprocess & Detect Face in Image", command=preprocess)
preprocessbutton.place(x=50, y=150)
preprocessbutton.config(font=font1)

emotion = Button(main, text="Detect Emotion", command=detectEmotion)
emotion.place(x=50, y=200)
emotion.config(font=font1)

emotionlabel = Label(main)
emotionlabel.config(bg='brown', fg='white')
emotionlabel.config(font=font1)
emotionlabel.place(x=610, y=200)
emotionlabel.config(text="Predicted Song")

value = ["Song List"]
songslist = ttk.Combobox(main, values=value,
                         postcommand=lambda: songslist.configure(values=value))
songslist.place(x=760, y=210)
songslist.current(0)
songslist.config(font=font1)

playsong = Button(main, text="Play Song", command=playSong)
playsong.place(x=50, y=250)
playsong.config(font=font1)

font1 = ('times', 12, 'bold')
text = Text(main, height=10, width=150)
scroll = Scrollbar(text)
text.configure(yscrollcommand=scroll.set)
text.place(x=10, y=300)
text.config(font=font1)

main.config(bg='brown')
main.mainloop()

 

 

 

 SYSTEM TESTING

 

The purpose of testing is to discover errors. Testing is the process of trying to discover every conceivable fault or weakness in a work product. It provides a way to check the functionality of components, subassemblies, assemblies and/or the finished product. It is the process of exercising software with the intent of ensuring that the software system meets its requirements and user expectations and does not fail in an unacceptable manner. There are various types of tests; each test type addresses a specific testing requirement.

 

TYPES OF TESTS

 

Unit testing:

          Unit testing involves the design of test cases that validate that the internal program logic is functioning properly and that program inputs produce valid outputs. All decision branches and internal code flow should be validated. It is the testing of individual software units of the application; it is done after the completion of an individual unit, before integration. This is structural testing that relies on knowledge of the unit's construction and is invasive. Unit tests perform basic tests at the component level and test a specific business process, application, and/or system configuration. Unit tests ensure that each unique path of a business process performs accurately to the documented specifications and contains clearly defined inputs and expected results.

 

Integration testing:

Integration tests are designed to test integrated software components to determine whether they actually run as one program. Testing is event-driven and is more concerned with the basic outcome of screens or fields. Integration tests demonstrate that although the components were individually satisfactory, as shown by successful unit testing, the combination of components is correct and consistent. Integration testing is specifically aimed at exposing the problems that arise from the combination of components.

Functional test:

Functional tests provide systematic demonstrations that functions tested are available as specified by the business and technical requirements, system documentation, and user manuals.

Functional testing is centered on the following items:

Valid Input                :  identified classes of valid input must be accepted.

Invalid Input              :  identified classes of invalid input must be rejected.

Functions                  :  identified functions must be exercised.

Output                     :  identified classes of application outputs must be exercised.

Systems/Procedures         :  interfacing systems or procedures must be invoked.

 

Organization and preparation of functional tests is focused on requirements, key functions, or special test cases. In addition, systematic coverage pertaining to identifying business process flows, data fields, predefined processes, and successive processes must be considered for testing. Before functional testing is complete, additional tests are identified and the effective value of current tests is determined.

 

System Test:

     System testing ensures that the entire integrated software system meets requirements. It tests a configuration to ensure known and predictable results. An example of system testing is the configuration oriented system integration test. System testing is based on process descriptions and flows, emphasizing pre-driven process links and integration points.

 

White Box Testing:

        White Box Testing is testing in which the software tester has knowledge of the inner workings, structure and language of the software, or at least its purpose. It is used to test areas that cannot be reached from a black box level.

Black Box Testing:

        Black Box Testing is testing the software without any knowledge of the inner workings, structure or language of the module being tested. Black box tests, like most other kinds of tests, must be written from a definitive source document, such as a specification or requirements document. It is testing in which the software under test is treated as a black box: you cannot "see" into it. The test provides inputs and responds to outputs without considering how the software works.

 

 Unit Testing:

Unit testing is usually conducted as part of a combined code and unit test phase of the software lifecycle, although it is not uncommon for coding and unit testing to be conducted as two distinct phases.

 

Test strategy and approach:

          Field testing will be performed manually and functional tests will be written in detail.

 

Test objectives:

·       All field entries must work properly.

·       Pages must be activated from the identified link.

·       The entry screen, messages and responses must not be delayed.

 

 

Features to be tested

·       Verify that the entries are of the correct format

·       No duplicate entries should be allowed

·       All links should take the user to the correct page.

 

 Integration Testing

Software integration testing is the incremental integration testing of two or more integrated software components on a single platform to produce failures caused by interface defects.

            The task of the integration test is to check that components or software applications, e.g. components in a software system or – one step up – software applications at the company level – interact without error.

 

Test Results: All the test cases mentioned above passed successfully. No defects encountered.

 

 Acceptance Testing

User Acceptance Testing is a critical phase of any project and requires significant participation by the end user. It also ensures that the system meets the functional requirements.

 

Test Results:

 All the test cases mentioned above passed successfully. No defects encountered.

 


 INPUT DESIGN AND OUTPUT DESIGN


9.1 INPUT DESIGN:

 

The input design is the link between the information system and the user. It comprises developing specifications and procedures for data preparation, and the steps necessary to put transaction data into a usable form for processing. Data entry can be achieved by instructing the computer to read data from a written or printed document, or by having people key the data directly into the system. The design of input focuses on controlling the amount of input required, controlling errors, avoiding delay, avoiding extra steps and keeping the process simple. The input is designed in such a way that it provides security and ease of use while retaining privacy. Input design considered the following things:

·       What data should be given as input?

·       How should the data be arranged or coded?

·       The dialog to guide the operating personnel in providing input.

·       Methods for preparing input validations, and steps to follow when errors occur.

 

OBJECTIVES:

1. Input design is the process of converting a user-oriented description of the input into a computer-based system. This design is important to avoid errors in the data input process and to show the correct direction to the management for getting correct information from the computerized system.

2. It is achieved by creating user-friendly screens for data entry to handle large volumes of data. The goal of designing input is to make data entry easier and free from errors. The data entry screen is designed in such a way that all the data manipulations can be performed. It also provides record viewing facilities.

3. When the data is entered, it is checked for validity. Data can be entered with the help of screens. Appropriate messages are provided as and when needed, so that the user is not left in confusion. Thus the objective of input design is to create an input layout that is easy to follow.

 

9.2 OUTPUT DESIGN:

 

A quality output is one which meets the requirements of the end user and presents the information clearly. In any system, the results of processing are communicated to the users and to other systems through outputs. In output design, it is determined how the information is to be displayed for immediate need, and also the hard-copy output. The output is the most important and direct source of information for the user. Efficient and intelligent output design improves the system's relationship with the user and supports decision-making.

1. Designing computer output should proceed in an organized, well-thought-out manner; the right output must be developed while ensuring that each output element is designed so that people will find the system easy and effective to use. When analysts design computer output, they should identify the specific output that is needed to meet the requirements.

2. Select methods for presenting information.

3. Create document, report, or other formats that contain information produced by the system.

The output form of an information system should accomplish one or more of the following objectives.

v  Convey information about past activities, current status, or projections of the future.

v  Signal important events, opportunities, problems, or warnings.

v  Trigger an action.

v  Confirm an action.

 

SCREENSHOTS:

Image uploading

Detecting face in image

Detecting emotions

Playing song
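
The four screenshots trace the system end to end: an image is uploaded, a face is detected, the emotion is classified, and a matching song is played. As a rough sketch of that flow, assuming OpenCV's bundled Haar cascade for face detection, a Keras CNN saved as emotion_model.h5 (a placeholder file name) for expression classification, and a hypothetical emotion-to-playlist mapping:

```python
import random

import cv2
import numpy as np
from tensorflow.keras.models import load_model

# FER-2013-style label order; assumed to match the CNN's training labels.
EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

# Hypothetical mapping from detected emotion to local song files.
PLAYLISTS = {
    "happy": ["songs/happy1.mp3", "songs/happy2.mp3"],
    "sad": ["songs/sad1.mp3"],
}

# Haar cascade shipped with OpenCV; emotion_model.h5 is a placeholder name.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
emotion_model = load_model("emotion_model.h5")

def detect_emotion(image_path: str) -> str:
    """Detect the first face in the uploaded image and classify its expression."""
    gray = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    if len(faces) == 0:
        return "neutral"  # fall back when no face is found
    x, y, w, h = faces[0]
    face = cv2.resize(gray[y:y + h, x:x + w], (48, 48)) / 255.0
    probs = emotion_model.predict(face.reshape(1, 48, 48, 1))
    return EMOTIONS[int(np.argmax(probs))]

def recommend_song(image_path: str) -> str:
    """Pick a track from the playlist that matches the detected emotion."""
    emotion = detect_emotion(image_path)
    return random.choice(PLAYLISTS.get(emotion, PLAYLISTS["happy"]))
```

The chosen file could then be handed to any audio back end (e.g., the pygame.mixer module) to complete the "Playing song" step.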


FUTURE ENHANCEMENT

A music player driven by facial expression recognition is highly useful in modern daily life, and the system can be enhanced with several beneficial features in future upgrades. Automatic song playback can be extended by detecting the facial expression through a programming interface with a Raspberry Pi (RPi) camera. A further enhancement is to support the additional emotions excluded from the current system, such as disgust and fear, so that music is played automatically for those emotions as well.


CONCLUSION

 

In this project, we presented a model that recommends music based on the emotion detected from the user's facial expression. The project proposed, designed, and developed an emotion-based music recommendation system built on face recognition. Music has the power to ease stress and soothe emotions, and recent developments promise wide scope for emotion-based music recommendation systems. Thus, the proposed system presents a face-based emotion recognition system that detects the user's emotion and plays music matching it.

REFERENCES

[1] Jumani, S.Z., Ali, F., Guriro, S., Kandhro, I.A., Khan, A. and Zaidi, A., 2019. Facial Expression Recognition with Histogram of Oriented Gradients using CNN. Indian Journal of Science and Technology, 12, p.24.

[2] J. R. Barr, L. A. Cament, K. W. Bowyer, and P. J. Flynn. Active clustering with ensembles for social structure extraction. In Winter Conference on Applications of Computer Vision, pages 969–976, 2014

[3] Ren, S., He, K., Girshick, R. and Sun, J., 2015. Faster R-CNN: Towards real-time object detection with region proposal networks. In Advances in Neural Information Processing Systems (pp. 91–99).

[4] R. Goh, L. Liu, X. Liu, and T. Chen. The CMU face in action (FIA) database. In International Conference on Analysis and Modelling of Faces and Gestures, pages 255–263, 2005.

[5] L. Wolf, T. Hassner, and I. Maoz. Face recognition in unconstrained videos with matched background similarity. In Computer Vision and Pattern Recognition, pages 529–534, 2011.

[6] Y. Wong, S. Chen, S. Mau, C. Sanderson, and B. C. Lovell. Patch-based probabilistic image quality assessment for face selection and improved video-based face recognition. In Computer Vision and Pattern Recognition Workshops, pages 74–81, 2011.

[7] J. R. Beveridge, P. J. Phillips, D. S. Bolme, B. A. Draper, G. H. Givens, Y. M. Lui, M. N. Teli, H. Zhang, W. T. Scruggs, K. W. Bowyer, P. J. Flynn, and S. Cheng. The challenge of face recognition from digital point-and-shoot cameras. In Biometrics: Theory, Applications and Systems, pages 1–8, 2013.

[8] N. D. Kalka, B. Maze, J. A. Duncan, K. A. O'Connor, S. Elliott, K. Hebert, J. Bryan, and A. K. Jain. IJB–S: IARPA Janus Surveillance Video Benchmark. In IEEE International Conference on Biometrics: Theory, Applications, and Systems, 2018.

[9] M. Singh, S. Nagpal, N. Gupta, S. Gupta, S. Ghosh, R. Singh, and M. Vatsa. Cross-spectral cross-resolution video database for face recognition. In IEEE International Conference on Biometrics Theory, Applications and Systems, 2016.

[10] X. Zhu and D. Ramanan, "Face detection, pose estimation, and landmark localization in the wild," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., Jun. 2012, pp. 2879–2886.

[11] D. Chen, S. Ren, Y. Wei, X. Cao, and J. Sun, "Joint cascade face detection and alignment," in Proc. Eur. Conf. Comput. Vis., 2014, vol. 8694, pp. 109–122.

[12] D. Eigen and R. Fergus, "Predicting depth, surface normals and semantic labels with a common multi-scale convolutional architecture," in Proc. IEEE Int. Conf. Comput. Vis., 2015, pp. 2650–2658.

[13] A. Krizhevsky, I. Sutskever, and G. E. Hinton, ImageNet classification with deep convolutional neural networks, in: Proc. Adv. Neural Inf. Process. Syst., 2012, pp. 1097–1105.

[14] K. He, X. Zhang, S. Ren, J. Sun, Deep residual learning for image recognition, in: Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2016, pp. 770–778.

[15] C. Farabet, C. Couprie, L. Najman, and Y. LeCun, Learning hierarchical features for scene labeling, IEEE Trans. Pattern Anal. Mach. Intell. 35 (8) (2013) 1915–1929.

[16] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman, The PASCAL visual object classes (VOC) challenge, Int. J. Comput. Vis. 88 (2) (2010) 303–338.

 [17] R. Girshick, J. Donahue, T. Darrell, J. Malik, Rich feature hierarchies for accurate object detection and semantic segmentation, in: Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2014, pp. 580–587.

[18] P. Sermanet, D. Eigen, X. Zhang, M. Mathieu, R. Fergus, and Y. LeCun, OverFeat: Integrated recognition, localization and detection using convolutional networks, in: International Conference on Learning Representations, 2014.

[19] Sivaram, M., Porkodi, V., Mohammed, A.S. and Manikandan, V., 2019. Detection of Accurate Facial Detection Using Hybrid Deep Convolutional Recurrent Neural Network. ICTACT Journal on Soft Computing, 9(2).


[20] Shafiee, M.J., Chywl, B., Li, F. and Wong, A., 2017. Fast YOLO: A fast you only look once system for real-time embedded object detection in video. arXiv preprint arXiv:1709.05943.

[21] Ravidas, S. and Ansari, M.A., 2019. An Efficient Scheme of Deep Convolution Neural Network for Multi View Face Detection. International Journal of Intelligent Systems and Applications, 11(3), p.53.

[22] Gupta, S., Gupta, N., Ghosh, S., Singh, M., Nagpal, S., Vatsa, M. and Singh, R., 2019. FaceSurv: A benchmark video dataset for face detection and recognition across spectra and resolutions. challenge, 14, p.18.

[23] Guo, G., Wang, H., Yan, Y., Zheng, J. and Li, B., 2019. A Fast Face Detection Method via Convolutional Neural Network. Neurocomputing.






TEAM MEMBERS


VALUPADASU SATHWIKA
2003A52019
contact: 8074208204

EMMADI GAYATHRI
2003A52031
contact: 7386266296

PRANATHI VADDIRAJU
2003A52041
contact: 8374894094

REVURI NAGARAJ REDDY
2003A52042
contact: 8978387115

ADDANKI THIRUMALA SATHWIKA
2003A52086
contact: 9502351250



























