The power is taken from a 12 V SMPS. Output pins 3 and 6 of the L293D drive the first motor, and output pins 14 and 11 drive the second motor; the corresponding input pins are connected to the Raspberry Pi.

4.1.6 L293D driver IC
Figure 4.6: L293D driver IC
Details of the L293D IC: the supply voltage range is DC 4.5 V to 36 V; a separate input logic supply is provided; thermal shutdown; ESD protection; output current rating of 600 mA per channel.
Operation of the L293D: the inputs are TTL compatible, and the drivers in this IC are enabled in pairs. When an enable input is high, the drivers associated with it are enabled and their outputs are active and in phase with the inputs. When the enable input is low, the outputs are disabled and go to a high-impedance state. With the proper inputs, each pair of drivers forms a reversible bridge drive suitable for DC motors.

4.1.7 ULN2003 Driver IC
Figure 4.7: ULN2003 Driver integrated circuit
The ULN2003 is a high-voltage, high-current Darlington array driver IC. It is used when a low-voltage circuit must switch a higher-voltage load connected to the mains power supply; for example, the relay coil required to run a relay is switched through it. ESD protection is one of the notable properties of the ULN2003 IC. It is a 16-pin IC containing seven NPN Darlington pairs with common emitters. The ULN2003 is mainly used in line drivers, display drivers and for driving a wide range of loads, most commonly motors; hence the stepper motors in this project are driven by the ULN2003 IC. Each Darlington pair in the ULN2003 can withstand a peak current of 600 mA. The inputs and outputs face each other in the pin layout, and the drivers are equipped with suppression diodes that clamp the voltage spikes produced when inductive loads are driven.
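The enable/input behaviour of the L293D described above can be sketched as a small truth table in Python. This is a minimal sketch for illustration only: the direction names and the idea of returning (IN1, IN2, EN) levels are assumptions, not the project's recorded code, and any GPIO pin numbers would have to match the actual wiring.

```python
# Sketch of one L293D channel pair driven from the Raspberry Pi.
# The function maps a desired motor action to the (IN1, IN2, EN) logic
# levels; with EN high the outputs follow the inputs in phase, and with
# EN low the outputs go to a disabled, high-impedance state.

def l293d_pin_levels(direction):
    """Return (IN1, IN2, EN) logic levels for one L293D channel pair."""
    levels = {
        "forward": (1, 0, 1),   # OUT1 high, OUT2 low -> motor turns one way
        "reverse": (0, 1, 1),   # OUT1 low, OUT2 high -> motor turns the other way
        "brake":   (0, 0, 1),   # both outputs driven low -> motor brakes
        "coast":   (0, 0, 0),   # enable low -> outputs high-impedance, motor coasts
    }
    return levels[direction]

if __name__ == "__main__":
    # On the Pi itself these levels would be applied with a GPIO library,
    # e.g. GPIO.output([IN1_PIN, IN2_PIN, EN_PIN], l293d_pin_levels("forward"))
    print(l293d_pin_levels("forward"))
```

Reversing only IN1 and IN2 while EN stays high is what gives the "reversible drive" mentioned above.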
4.2 Software Requirements

4.2.1 Raspbian OS
Figure 4.8: Raspbian OS
Raspbian is the standard operating system available for the Raspberry Pi, and it can be downloaded and installed freely. It is an unofficial port of Debian Linux that runs effectively on the Raspberry Pi. In simple terms, an operating system is a set of programs and utilities that run on the hardware, in this case the Raspberry Pi. ABI refers to the application binary interface; in this context it refers to the rules used to set up the registers and the stack. Debian is a freely available operating system that includes a set of programs and many packages that can be installed easily. Raspbian is a standard and stable OS in the Linux community. It is very helpful for new users, whose practical issues are well supported by the community. Hence Raspbian is an ideal operating system for users who want a simple platform to work on.

4.2.2 Tkinter
Tkinter is a graphical user interface (GUI) toolkit for Python based on an object-oriented model. To create a GUI, all the necessary objects are created first; the object types include buttons, text fields, entry fields and menus. Tk was developed for Tcl, a simple scripting language. Tkinter's advantages are that it is easy to use, complete in itself and available on all operating systems. Since it follows an object-oriented model, it has two kinds of objects: windows and widgets. Widgets are user interface objects that are arranged in a hierarchical order and attached to the root window. Frames, which are also widgets, act as sub-windows.

4.2.3 PuTTY
PuTTY is an open-source tool that is available for free. It is a network file transfer application that supports many protocols such as SCP, SSH and Telnet, among other connections. PuTTY can also be connected to a serial port.
PuTTY was originally developed for Microsoft Windows, and it can also be used on other operating systems. PuTTY was originally written by Simon Tatham. PuTTY provides some distinct advantages, specifically when working remotely: it is easier to configure and more stable. PuTTY is very persistent in comparison to other systems; a remote PuTTY session can easily be resumed after an interruption has occurred. Most secure remote terminals are supported by PuTTY, and some Linux console features that are not supported by xterm are supported by PuTTY.

Figure 4.9: PuTTY Configuration
Figure 4.9 shows the PuTTY configuration. Here the user first enters the host name or IP address to which the user wishes to connect. After entering the address, the user selects the connection type; the options are Raw, Telnet, Rlogin, SSH and Serial. Based on the requirements of this project, SSH is chosen. If the user has previously logged in, the user can load, save or delete a session that is already stored. A help button is also provided where the user can resolve errors.

4.2.4 RealVNC
Figure 4.10: VNC portal
VNC Connect mainly consists of two apps, VNC Viewer and VNC Server. VNC Server allows the user to connect to the Pi from a PC or mobile device, watch its screen in real time, and control and operate the Pi as though sitting in front of it. VNC Viewer enables the user to connect to the Pi and control the device.

Figure 4.11: Raspberry Pi IP address
Figure 4.11 shows the details used to configure the IP address of the Raspberry Pi. Once the user knows the IP address, the VNC Viewer portal is opened and the exact IP address of the network to which the Raspberry Pi's Wi-Fi is connected is entered. After specifying the correct user name and password of the Raspberry Pi, the user can launch into the essentials required.
4.2.5 Dlib
Figure 4.12: Dlib
Dlib is a standard, freely available C++ toolkit. It contains many of the machine learning algorithms used to create software in C++ and to solve real-world problems through its Python API. The dlib tools also contain various classes, functions and detailed API listings for different real-time applications that can be incorporated in Python, which makes the user's tasks easy. Dlib is one of the most design-driven software platforms in the C++ programming language, grounded in software engineering principles. Dlib is released under the Boost Software License.

4.2.6 ThingSpeak
Figure 5.18: ThingSpeak
ThingSpeak is an open-source Internet of Things (IoT) platform; it is a freely available tool that can be used in various applications where data is sent to a server using the Hypertext Transfer Protocol (HTTP). Once the nurse is authenticated, a signal is sent through it from the PC to the Raspberry Pi of the robotic device in order to access the device.

4.2.7 OpenCV
Figure 4.13: OpenCV
OpenCV is a widely used computer vision library that mainly aims to provide solutions for real-time computer vision by incorporating various libraries. OpenCV was initially developed by Intel, was later supported by Willow Garage and Itseez, and Intel subsequently acquired Itseez. OpenCV provides good input/output support for image file capture, video capture and loading of images. Most image processing features are built in. There are numerous algorithms and functions that are very useful to users who work on image processing, especially in object recognition, object detection, face detection and, most importantly, face recognition. Machine learning algorithms can be applied easily, which is very useful in developing real-time applications. Even though OpenCV is written in C++, its functions and bindings are adaptable to other languages like Python, Java and MATLAB.
Various deep learning frameworks such as TensorFlow, PyTorch and Caffe are also supported by OpenCV.

4.2.8 OpenFace
OpenFace is an open-source, freely accessible library implemented mainly for the development of face recognition, using deep neural networks in Python and Torch. Torch is a neural network framework that can be executed on the CPU or using CUDA. OpenFace was designed by Brandon Amos, Bartosz Ludwiczuk and Mahadev Satyanarayanan. Important features of OpenFace are: OpenFace detects faces using pre-trained face models from OpenCV or dlib; OpenFace transforms a face for the neural network using dlib's real-time pose estimation together with OpenCV, based on the dlib repositories; using a deep neural network, a face can be represented by 128 encodings; using OpenFace, faces whose encodings lie a larger distance apart, i.e. the difference between two different face encodings, can be easily distinguished; OpenFace makes face recognition easy by taking classification and clustering into consideration; and, using the LFW benchmark dataset with the pre-trained model, the accuracy of the face recognition is improved.

4.2.9 Python
Figure 4.14: Python
Python is a high-level scripting language used for general-purpose programming. Python was created by Guido van Rossum and first released in 1991. The design philosophy behind the Python language is to emphasize the readability of code. Python also provides dynamic features suitable for multiple paradigms; it has built-in support for object-oriented, procedural and functional styles, and many operating systems support Python. Important features of Python include: coding in Python is very simple compared to languages such as C and C++; the syntax of Python can be learnt quickly by programmers, and it is a user-friendly language. Even though Python is called a high-level language, it is very similar to the English language, so the logic behind the code can be easily interpreted.
Python is available for free and can be downloaded by anyone. One of the best features of Python is that it is portable: code written on Windows can be ported to the Mac and Linux platforms. The Python language can be embedded with other languages such as C. Python has a huge set of libraries, which is very beneficial for users. The Python language is extensively used in machine learning, deep learning and artificial intelligence.

CHAPTER 5
DESIGN IMPLEMENTATION
This section gives a detailed description of the implementation of the Automatic Medicine Dispenser. The prototype uses the Raspberry Pi kit to control the major operations of the device. A graphical user interface is developed for the authentication of the nurse. Various apps, namely the nurse app, the patient app and the device app, are also implemented.

Figure 5.1: Implementation flow diagram
Figure 5.1 shows the flow diagram of the implementation process used to develop the prototype of the Automatic Medicine Dispenser. The first step is the creation of a GUI for the authentication of the nurse. The GUI is implemented using the Python library Tkinter. The necessary functions to capture the photo of the nurse, the functions for date and time, and the program to execute face recognition are included in the GUI. IR sensors are interfaced with the Raspberry Pi for line following. Five IR sensors are mounted on the Automatic Dispenser, of which three are used to judge the boundaries of the path by calculating the voltage difference, and the other two are used to detect the paths to patient 1 and patient 2. A colour combination is used in the working of the sensors: when the middle sensor detects black and the other sensors detect white, the device is programmed to move forward. When the left sensor detects the marker it indicates the path to patient 1, and when the right sensor detects it the path to patient 2 is indicated. The final step is the development of apps for communication between the patients and the nurse, implemented using the MIT App Inventor portal.
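The line-following colour logic described in the flow above can be sketched as a small decision function. This is an illustrative sketch only: the convention that a sensor reads 1 on black and 0 on white, the sensor argument names, and the command strings are assumptions, not the project's recorded code.

```python
# Sketch of the five-sensor line-following decision logic described above.
# Assumed convention: a sensor returns 1 when it sees the black line and
# 0 on the white background; left/right are the branch-marker sensors and
# mid_left/middle/mid_right are the three path-boundary sensors.

def drive_decision(left, mid_left, middle, mid_right, right):
    """Map the five IR sensor readings to a drive command."""
    if left:                                  # left marker sensed
        return "turn_to_patient1"
    if right:                                 # right marker sensed
        return "turn_to_patient2"
    if middle and not (mid_left or mid_right):
        return "forward"                      # line only under the centre sensor
    if mid_left:
        return "steer_left"                   # line drifting left -> correct
    if mid_right:
        return "steer_right"                  # line drifting right -> correct
    return "stop"                             # line lost
```

For example, `drive_decision(0, 0, 1, 0, 0)` corresponds to the "middle sensor detects black, others detect white" case and yields the forward command.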
5.2 IMPLEMENTATION OF FACE RECOGNITION
Face recognition is a technique used to identify and recognize the faces of people. There are many methods by which facial recognition systems work, but in general, face recognition works by comparing selected facial features from a given image with the faces present in a database of images. The following are the major steps followed in the implementation of face recognition:
Step 1: Implementation of the face recognition system by installing the prerequisites such as Python, OpenFace, OpenCV and dlib on the Linux platform, using the Python scripting language.

Figure 5.2: Flow diagram of the Face Recognition System
Figure 5.2 shows the flow diagram of the face recognition system. The major steps involved in face recognition are: first, real-time images of the people whose faces are to be recognized are loaded into a folder that also contains the dataset of images. Once the dataset is created, the relevant features in an image are found for face detection using the HOG algorithm. Once a face is detected, the encodings of the detected face are matched against the known faces in the repository; if they match, the face is said to be recognized.

5.2.5 FACE DETECTION
The process of identifying a face in an image is called face detection. The HOG (Histogram of Oriented Gradients) algorithm is used for face detection; HOG is more reliable than other existing algorithms [6].

HOG algorithm
The method counts the occurrences of gradient orientations in localized portions of the image. HOG is similar to edge orientation histograms and the scale-invariant feature transform, but its accuracy is improved because the computation is done on a dense grid of uniformly spaced cells. Using HOG, the appearance and shape of a local object within an image can be described by the distribution of intensity gradients or edge directions.
The image is divided into smaller regions called cells, and for every pixel in a cell a histogram of gradient directions is compiled. The descriptor is the concatenation of these histograms.

Steps used to create the HOG pattern of an image:
Step 1: Preprocessing of the image: an image of arbitrary size is processed to obtain an image of the required size; in this context the original image is resized to 16×16.
Step 2: Calculating the gradients of the image: the horizontal and vertical gradients of the image are calculated by filtering the image with the kernels [-1, 0, 1] and [-1, 0, 1]T.
Step 3: Calculating the histogram of gradients in 8×8 cells: the histogram of gradients is generated by computing the magnitude and direction of the gradient.
Step 4: 16×16 block normalization: here the image is normalized to obtain a representation that is not affected by lighting conditions.
Step 5: Calculating the HOG feature vector: the HOG feature vector of the image is generated from the HOG descriptors.

Implementation of Face Detection
Step 1: Database creation
The first step in face detection is creating the dataset of images: real-time images of people are taken and stored in the folder "picture_of_known_people".

Figure 5.13: Database of known images
Figure 5.13 shows the image database of known people that is created; that is, the images of the persons whose faces have to be recognized are stored in the "picture_of_known_people" folder. A folder containing the images of the faces to be recognized is created and saved in the OpenFace repository using the command shown below.
mkdir ./training-images/picture_of_known_people/
A subfolder containing the images of each person whose face is to be recognized is created, named after that person, using the following commands:
mkdir ./training-images/picture_of_known_people/dhanu/
mkdir ./training-images/picture_of_known_people/divya/
mkdir ./training-images/picture_of_known_people/chandrakanth/
The images in the subfolders are encoded using the HOG algorithm, and a simplified version of each image is created that resembles the generic HOG encoding of a face, which is used in face detection.

Step 2: Creating the HOG gradient of an image
The original image is first resized. The histogram of gradients is calculated by filtering the image using the Sobel operator available in OpenCV. The magnitude and direction of the gradients are calculated using the formulas below:
g = √(gx² + gy²)
θ = arctan(gy / gx)
In OpenCV, the gradients can be calculated using the function "cartToPolar".

Figure 5.14: HOG pattern vector
The HOG representation of a known face is created as shown in figure 5.14. Then, 68 measurements on the face are estimated using the face landmark estimation algorithm.
Figure 5.15: Landmark estimation of the face

Step 3: Encoding of faces
The best method to recognize a face is to compare a known face with an unknown face, so encoded measurements of the face are created after face detection. These encoded measurements of the detected faces are matched, using OpenFace, with the encodings of the known people's faces stored in the folder.
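The per-pixel gradient computation from Step 2 above can be sketched in plain Python. This is a minimal sketch for illustration: it applies the [-1, 0, 1] and transposed [-1, 0, 1] kernels at one interior pixel of a grayscale image given as a list of rows, producing the same magnitude and angle quantities that OpenCV's cartToPolar would return; the function name and image layout are assumptions.

```python
import math

# Sketch of the Step 2 gradient computation at one interior pixel (x, y)
# of a grayscale image (a list of rows of intensity values).

def gradient_at(image, x, y):
    """Return (magnitude, direction in degrees) of the gradient at (x, y)."""
    gx = image[y][x + 1] - image[y][x - 1]   # horizontal [-1, 0, 1] filter
    gy = image[y + 1][x] - image[y - 1][x]   # vertical [-1, 0, 1]^T filter
    magnitude = math.hypot(gx, gy)            # g = sqrt(gx^2 + gy^2)
    direction = math.degrees(math.atan2(gy, gx))  # theta = arctan(gy / gx)
    return magnitude, direction
```

On an image whose intensity increases only downwards, for example, the function reports a purely vertical gradient (direction 90 degrees), which is what the histogram in Step 3 would then bin.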
Figure 5.16: Encoded measurements of the face
Figure 5.16 shows the encoded measurements of the face that are generated. OpenFace provides a Lua script that generates the encodings of all images in a folder and writes them to a csv file, using the command:
./batch-represent/main.lua -outDir ./generated-embeddings/ -data ./aligned-images/

Step 4: Finding the person's name from the encoding
Based on these face encodings, the known person's face is recognized using an SVM classifier, which matches the face encodings to the right face. The Python code is run to check the results of face recognition with the command:
./Face_recognition.py

Step 5: Finally, the face of the nurse is recognized.
Figure 5.17: Result of the Face Recognition System
Figure 5.17 shows the result of the implemented face recognition. Here, the persons whose images are stored in the face recognition folder are identified, and any person whose images are not stored in that folder is identified as unknown.

5.3 GUI creation for authentication
A standard toolkit used in Python for GUI creation is Tkinter. The main advantages of choosing Tkinter are the vast availability of resources and the ease of getting started, as the code is human-readable and understandable. It also has a large user community to help if one is stuck with errors. The Tkinter module is downloaded for the installed Python version in order to create a GUI for authentication. The labelled text box widgets are designed as shown in figure 6.1, where the nurse has to enter her name and password. The necessary modules to run face recognition and the password check are also imported.

5.4 Creation of Apps
The various apps, namely the app for the patient, the app for the nurse and the device app, are created based on the requirements. The program required for each app is formulated using if-else and loop blocks in the MIT App Inventor portal, which are assembled in the Designer window. MIT stands for Massachusetts Institute of Technology.
The advantages of MIT App Inventor are: apps built with it can be operated on Android smartphones and tablets; apps can be created easily if the logic is known; the MIT portal runs in the browser, so installing it on a PC is not required and only email credentials are needed to run the portal; and no coding is required, as the GUI is built from logic blocks and component blocks, so one only has to assemble multiple blocks together according to the logic.

5.4.1 Steps to be followed in creating an app
Step 1: Enter the MIT App Inventor portal.
Step 2: Sign in with the email ID in order to create an app.
Step 3: To create a new project, click on File and start a new project.
Step 4: Start designing the app.

Figure 5.19: Implementation of the patient app
Figure 5.19 shows the implementation of the patient app. The app is designed taking into consideration various aspects, such as choosing the prescribed designer blocks based on the logic and the requirements of the user. Different elements, such as the patient's number, the message to be sent to the nurse, and the emergency button, are designed using the options provided in the Blocks window.

Figure 5.20: Implementation of the nurse app
Figure 5.20 shows the implementation of the nurse app. Logic blocks are added based on the features: a block changes the colour indication in the nurse's app when the patient sends an SMS to it, and once the medicine is taken by the patient, this too is highlighted by a different colour. Text boxes are also created to display the messages received from the patient.

Figure 5.21: Screenshot showing the implementation of the device app
Figure 5.21 shows the device app, which has features built in based on the selected logic.
The device app contains designed text boxes to display the message "medicines have arrived, take medicines" to the patient app, a voice playback of "medicine has arrived, please take your medicines" as an intimation to the patient to take the medicines, and, after the robot reaches its nursing station following delivery of the medicines, a message and audio output indicating "reached destination".

5.5 Interfacing the Raspberry Pi with the IR sensor
Figure 5.22: Interfacing of the Raspberry Pi with the IR sensor
Figure 5.22 shows the interfacing of the Raspberry Pi with the IR sensor. The IR sensor module has three pins: VCC, GND and Data. The VCC and GND pins of the IR sensor are connected to the +5V and GND pins of the Raspberry Pi. An IR sensor module consists of three main parts: an IR transmitter, an IR detector and a control circuit. An IR LED is usually used as the transmitter, and a photodiode or phototransistor is usually used as the IR detector. The main control circuit consists of a comparator IC along with the other components that are necessarily required. Depending on the application and the requirements, the IR sensors can be placed accordingly; in this project, five IR sensors are placed concurrently. The middle three sensors are used to follow the path, while the right and left sensors are used to detect the branch paths based on the received signal. When the IR sensor on the right detects the marker, a high signal is received, indicating the path to patient 2; when the IR sensor on the left is triggered, it indicates the path to patient 1. Based on this concept the line-following path is designed.

5.6 Interfacing the Raspberry Pi with the relay circuit
Figure 5.23: Interfacing of the Raspberry Pi with the relay circuit
A relay is a simple electromechanical device that consists of a coil and a few electrical contacts.
When the coil is energized, it acts as an electromagnet and closes a switch. When the coil is de-energized, it loses its magnetic nature and releases the switch. So, by controlling the coil, one can control a switch, which in turn controls an electrical load. The Raspberry Pi controls the relay circuit through the driver IC.

CHAPTER 6
RESULTS AND DISCUSSION
This chapter presents the designed Automatic Medicine Dispenser and briefly discusses the results obtained.

6.1 GUI to log in
The designed GUI is created using Tkinter. First, the nurse has to log in to the system by entering her name and password, as shown in figure 6.1.
Figure 6.1: GUI to log in to face recognition

6.2 GUI result of face recognition
The GUI for the authentication of the nurse is shown in figure 6.2, where the caretaker or nurse authenticates with her face and password.
Figure 6.2: Results of face recognition
Only if the face and password match can the nurse or caretaking staff access the device and send the medication to the patient. Access to the Automatic Medicine Dispenser from the nursing station is through ThingSpeak. Here the nurse must also specify to which patient the medicines have to be delivered, patient 1, patient 2 or both; by clicking on the submit button, the nurse gains access to the device to send medications to the prescribed patient.

6.3 Designed Automatic Medicine Dispenser
The Automatic Medicine Dispenser is designed in such a way that, once the nurse gets access to the device, the medicines are loaded into it. The robot then carries the medicines in order to dispense them to the respective patient.
Figure 6.3: Automatic medicine dispenser
Figure 6.3 shows the Automatic Medicine Dispenser.
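The nursing-station-to-dispenser hand-off through ThingSpeak mentioned above can be sketched as building the channel update request. This is a minimal sketch under stated assumptions: ThingSpeak's update endpoint genuinely takes an api_key and fieldN values over HTTP, but the write key placeholder and the field-1 encoding used here (1 = patient 1, 2 = patient 2, 3 = both) are illustrative assumptions, not the project's recorded configuration.

```python
from urllib.parse import urlencode

# Sketch of publishing the delivery target to a ThingSpeak channel.
# Assumed encoding of field1: 1 = patient 1, 2 = patient 2, 3 = both.
THINGSPEAK_UPDATE = "https://api.thingspeak.com/update"

def build_update_url(write_api_key, patient_code):
    """Build the HTTP GET URL that publishes the delivery target."""
    if patient_code not in (1, 2, 3):
        raise ValueError("patient_code must be 1, 2 or 3")
    query = urlencode({"api_key": write_api_key, "field1": patient_code})
    return f"{THINGSPEAK_UPDATE}?{query}"

# On the Pi, the dispenser side would fetch the matching channel-read
# endpoint (e.g. with urllib.request.urlopen) and act on the field value.
```

Keeping the URL construction in one function makes the nurse-side trigger easy to test without any network access.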
Once it reaches the patient, an audio indication intimating the patient that "the medicine has arrived, please take the medicines" is announced by the robot, and once the medicines are taken by the patient, intimation is given to the nurse through the Android app on the nurse's mobile that the patient has taken the medicines. Once the robot reaches the nursing station after delivery of the medicines, "reached its base station" is displayed on the device app.

6.4 Designed device app
The device app gives indications to the patients at the time of delivery of medications; for example, the image of patient 1/patient 2 is displayed with an audio indication of "moving towards patient 1/patient 2".
Figure 6.4: Device App
As shown in figure 6.4, based on the received value, the Automatic Medicine Dispenser carries medicines to patient 1, patient 2 or both patients. It also gives intimation to the patient that the medicine has arrived once it reaches the patient.

6.5 Designed patient app
The patient app gives notifications to the patients at medication time, displaying "Time for medication", and the robot also intimates the patient through an announcement that the patient has to take the medicines.
Figure 6.5: Patient App
Figure 6.5 shows the patient app with its built-in design. When the patient presses the button "Medicine Taken By Patient1", a notification is sent to the nurse's app that the medicines have been taken by patient 1/patient 2, indicated by colour so that it is easy to verify; "medicine not taken by patient 1/2" is indicated by pink. An emergency button is also included in the app, which, when clicked by the patient in an emergency, initiates a direct call to the nurse so that the nurse can act on the situation immediately.
6.6 Designed nurse app
The nurse app gives various intimations to the nurse through colour indicators: "red" indicates that patient 1/patient 2 has not taken the medicines, while "green" indicates that patient 1/patient 2 has taken the medicines, as shown in figure 6.6.
Figure 6.6: Nurse App
The correct delivery of medicines to the patients is indicated by "yellow"; otherwise, the indicator remains "grey". The colour intimations change based on the messages delivered to the nurse from the patient app. When the patient indicates that the medicine has not been taken, the colour changes to red in the nurse app, intimating the nurse that the patient has not taken the medicines so that she can resend the medicines to the patient. The communication between the patient and the nurse is by GSM.

CHAPTER 7
CONCLUSION AND FUTURE SCOPE
In the designed Automatic Medicine Dispenser, the nurse is authenticated using a face recognition system that is successfully implemented using the generic HOG (Histogram of Oriented Gradients) encoding algorithm, taking facial features and landmarks into consideration and installing the required open-source libraries such as OpenCV, OpenFace and dlib using the Python scripting language on the Linux platform. This authentication system is developed by creating a GUI (graphical user interface) using Tkinter in Python. The Medicine Dispenser is automated using the Raspberry Pi, which is the main control unit of the developed prototype. The control of the trays for dispensing medicines to the patient and the interfacing of the IR sensors used in line following are also handled by the Raspberry Pi. The apps, namely the nurse app, the patient app and the device app, are all developed using the MIT App Inventor portal for communication with the patients. Thus the system successfully dispenses the correct medicines at the correct time to the patient.
The major advantages of the Automatic Medicine Dispenser are: negligence by the nurse is avoided; the harmful effects of wrong medication due to misinterpretation of the prescription are avoided; patients can recover fast because the prescribed medicines are taken on time; and patients can immediately give intimation via the app when they are in a critical situation.

The future work is to develop human-robot interaction with patients in hospitals and old-age homes in order to understand their needs and also to provide assistance to the nurses or caretakers, and to implement a smart robotic system that automatically connects to the nearest pharmacist so that the prescribed medicines can be loaded automatically. Instead of the line-following system, an image processing framework with artificial intelligence can be implemented in order to make the process more realistic.