Document Type : Research Paper
Authors
College of Engineering, Abu Dhabi University, Abu Dhabi, UAE.
Abstract
Keywords
Introduction
Falls are a serious concern among elderly people, whose physical condition often cannot withstand the resulting injuries. Approximately 684,000 people die from falls every year, and most of them are aged 60 and above. As medical technology advances, more and more elderly people are able to live longer lives, but this comes at the cost of weakening physical condition. The global elderly population is expected to rise to 1.5 billion by 2050 (Kasai, T. 2021). This increase creates more opportunities for falls to occur, leading to psychological, physical, and financial difficulties for elderly people and their families. In addition, it puts a strain on medical infrastructure and resources. With this in mind, new research is needed on ways to reduce this problem. One promising area is the use of Machine Learning (ML) for early fall detection. Machine Learning is a rapidly growing field with applications in areas such as finance, technology, and medicine. The market for Artificial Intelligence (AI) and Machine Learning in healthcare is estimated to reach 28 billion U.S. dollars by 2025 (C. Stewart, 2020). When it comes to fall detection, machine learning systems can often detect falls more accurately and efficiently than a human can, given the same input data. This is an emerging field of research and shows potential to completely transform elderly care.
The aim of this paper is to survey the most common machine learning algorithms implemented for early fall detection in elderly people and their characteristics. The different types of fall detection systems, algorithms, tools, datasets, applications, and challenges in the field are discussed. This paper is organized as follows: Section 2 lists the different types of falls, while Section 3 presents the types of systems currently used to detect falls. Section 4 describes the research methodology applied in this paper. Section 5 examines the literature review and previous studies, and Section 6 summarizes the most common machine learning algorithms. Section 7 presents the tools, technologies, and common datasets applied in the study of fall detection, while the main challenges in fall detection systems are covered in Section 8. Finally, Section 9 concludes the paper.
Type of Falls
The process of falling can vary in multiple aspects, which are detailed in this section. Figure 1 shows a visual summary of these types. Falls can be classified based on the categories listed below.
Direction
Horizontal: Whether the person falls forwards or backwards.
Lateral: Whether the person falls towards the left or right.
Vertical: Whether the person falls directly downwards.
Speed/Acceleration
High Speed: A quick and sudden fall which occurs at high speed.
Low Speed: A slow or steady fall at low speed.
Impact/Force
High Impact: The person hits the ground with a large impact or force.
Low Impact: The person hits the ground with a low impact or force.
Time
Fast Fall: A quick and sudden fall which happens in a short amount of time.
Slow Fall: The person hits the ground after a longer time period.
Follow-Up
Stayed Down: The person remained on the ground and was not able to get back up on their own.
Got Back Up: The person was able to get back up on their own.
Figure 1. Types of Falls
Types of Fall Detection Systems
Fall detection devices and systems applying machine learning vary in many aspects and can be classified into multiple categories, as shown in Figure 2. Four broad types of systems have been utilized; they are summarized below:
Wearable: Wearable machine learning systems use devices equipped with sensors placed on the human body to detect falls. The standard approach is to measure changes in speed and/or acceleration using the sensors in order to decide whether a fall has occurred (a minimal code sketch of this idea is given after Figure 2). Other measured quantities may be body orientation, angle, or position. An example is SisFall (Sucerquia et al. 2017), which collects readings from an accelerometer and gyroscope worn on the subject’s body. Wearable methods are simple to implement and have the benefit of detecting falls regardless of the subject’s location and position. However, they may cause discomfort due to the placement of sensors on the body. They are also sensitive to environmental noise.
Environmental: Environmental fall detection applications also use sensors, but these are placed in the environment surrounding the person rather than on the body. The sensors typically detect changes in the surroundings by measuring force, pressure, sound, or light. This approach is used, for example, by (Li et al. 2012), where a microphone array placed near the test subject detects falls from audio signals. Compared with wearable approaches, environmental methods eliminate the issue of patient discomfort but can only detect falls in a specific location. They are also sensitive to environmental noise, which can affect the reliability of the data.
Vision: Vision-based approaches use cameras to visually monitor the patient. The video frames are converted and processed before being fed into a classifier trained using computer vision techniques to detect the person’s position and orientation for fall detection. SDUFall (Ma, X. et al. 2014), for example, uses this approach by monitoring the person with a Kinect camera. Vision-based approaches require more processing power to handle video frames and raise privacy concerns for the patient. However, they often provide higher accuracy.
Multimodal: Multimodal systems use a combination of the other types to detect falls. In this way, multiple measurements from different domains are combined to give a more reliable and accurate detection of falls. UP-Fall Detection, for example (Martínez-Villaseñor, L. et al. 2019), uses a multimodal approach in which wearable Inertial Measurement Unit (IMU) sensors, an Electroencephalography (EEG) headset, cameras, and infrared sensors are combined to detect falls. Multimodal approaches can often increase accuracy and decrease false positives, because measurements from multiple types of sources yield more accurate and definite classifications. However, they are more complex to implement, since there are more data points to process and analyze.
Figure 2. Types of Fall Detection System
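To make the wearable idea above concrete, the following is a minimal Python sketch of a threshold check on the magnitude of tri-axial accelerometer samples. The sample format, the 2.5 g threshold, and the function name are illustrative assumptions, not values taken from any of the surveyed systems, which typically combine such a check with orientation and inactivity analysis.

```python
import numpy as np

def detect_fall(acc_samples, impact_threshold_g=2.5):
    """Flag a possible fall when the acceleration magnitude exceeds a threshold.

    acc_samples: array of shape (n, 3) with (x, y, z) readings in units of g.
    impact_threshold_g: illustrative impact threshold; real systems tune this
    value on labelled data and usually add orientation/inactivity checks.
    """
    magnitudes = np.linalg.norm(acc_samples, axis=1)  # per-sample magnitude
    return bool(np.any(magnitudes > impact_threshold_g))

# Example: a short burst of readings with one large spike (a simulated impact).
readings = np.array([[0.0, 0.1, 1.0],
                     [0.2, 0.1, 1.1],
                     [1.8, 2.0, 2.4],   # sudden spike
                     [0.1, 0.0, 1.0]])
print(detect_fall(readings))  # True
```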
Literature Review
A detailed review of previous studies has been conducted to investigate the main machine learning algorithms applied to early fall detection for elderly people. The authors in (Martinez-Villaseñor, L., & Ponce, H. 2020) implemented a multimodal fall detection system using the UP-Fall Detection Dataset. The dataset uses multiple input devices: infrared sensors, an EEG brain sensor, wearable sensors, and a camera, arranged in 7 different modalities or combinations. The algorithms used were Random Forest (RF), Support Vector Machine (SVM), Multi-Layer Perceptron (MLP), and k-Nearest Neighbours (kNN). The highest-performing one in terms of accuracy was Random Forest (RF) when paired with the IMU modality, which gave results in the range of 94%-95%, while the combinations of body sensors+EEG and sensors+camera gave the highest results for specificity and F1 score (99.6% and 70.44%, respectively). On the other hand, sensors may sometimes be embedded in a device, as in the system implemented by (Lee, J. H. 2018). The accelerometer and gyroscope used here were the built-in sensors of a Galaxy Note 1 smartphone. Before the data were fed into the classifier for training, a Fourier Descriptor (FD) was used and 96 features were extracted. The Support Vector Machine algorithm was used for classification, and the proposed method achieved a fall detection accuracy of 96.14%, proving that a basic and relatively accurate system can be implemented using a simple smartphone. However, a limitation of this study is that the person must always carry the smartphone, so fall detection may be interrupted when calls or notifications appear on the device. While fall detection systems may be multimodal in terms of the types of input device used, they may also be multimodal in the sense of using algorithms from different domains. In the study by (de Quadros, T., et al. 2018), a wrist-worn wearable sensor was used to implement a fall detection system using a comprehensive set of threshold-based and machine learning methods. Threshold-based methods reached a maximum accuracy of 91.1%, while machine learning methods achieved 99%.
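As an illustration of how frequency-domain features such as the Fourier Descriptors in (Lee, J. H. 2018) can feed a Support Vector Machine, the sketch below extracts FFT magnitudes from fixed-length accelerometer windows with NumPy and trains an SVM with scikit-learn. The window length, the choice of 32 coefficients per axis (which happens to give a 96-dimensional vector of similar size to the one reported), and the synthetic data are assumptions for illustration; the actual feature construction in the paper may differ.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

def fft_features(window, n_coeffs=32):
    """Return the magnitudes of the first n_coeffs FFT coefficients per axis."""
    spectra = np.abs(np.fft.rfft(window, axis=0))[:n_coeffs]  # shape (n_coeffs, 3)
    return spectra.flatten()  # 3 * n_coeffs = 96 features per window

# Synthetic stand-in data: 200 windows of 128 tri-axial samples each,
# labelled 1 (fall) or 0 (activity of daily living).
rng = np.random.default_rng(0)
windows = rng.normal(size=(200, 128, 3))
labels = rng.integers(0, 2, size=200)

X = np.array([fft_features(w) for w in windows])
X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.3)

clf = SVC(kernel="rbf").fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```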
The study by (Ponce, H., et al. 2020) presented an open-source implementation of fall classification and detection systems using the public UP-Fall Detection dataset. In this repository, the raw or feature dataset is split into 70% for training and 30% for testing. The training procedure is carried out over four machine learning models: Random Forest (RF), Multi-Layer Perceptron (MLP), Support Vector Machine (SVM), and k-Nearest Neighbours (kNN). The highest accuracy, 95.09%, was obtained by the Random Forest (RF) method. A limitation of this work is that it cannot be applied directly to other datasets.
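The training procedure described above can be approximated with scikit-learn as in the following sketch, assuming the features X and labels y have already been loaded as arrays (for example from the UP-Fall CSV files). The hyperparameters are library defaults rather than the settings used in (Ponce, H., et al. 2020), so this should be read as an outline of the 70/30 evaluation loop, not a reproduction of the open-source repository.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def evaluate_models(X, y):
    """Train RF, MLP, SVM, and kNN on a 70/30 split and report test accuracy."""
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.30, stratify=y, random_state=42)
    models = {
        "RF": RandomForestClassifier(),
        "MLP": MLPClassifier(max_iter=500),
        "SVM": SVC(),
        "kNN": KNeighborsClassifier(),
    }
    results = {}
    for name, model in models.items():
        model.fit(X_train, y_train)
        results[name] = accuracy_score(y_test, model.predict(X_test))
    return results
```

Calling evaluate_models(X, y) returns a dictionary of held-out accuracies, one per model, which mirrors how the reviewed studies compare RF, MLP, SVM, and kNN on the same split.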
Deep learning is one area of machine learning that may also be used for fall detection, as shown by the vision-based approach of (Espinosa, R., et al. 2020). Data from the multimodal UP-Fall Detection dataset were used as input, but only the camera modalities were used. A Convolutional Neural Network (CNN) with three convolutional layers of dimensions 128x128x64 was used for fall detection and classification, and a sliding-window approach was used to capture temporal dependencies between samples. The results showed that the proposed multi-vision-based approach detects human falls with 95.64% accuracy. Filtering algorithms work well with vision-based approaches, as shown in the work by (Kavya, T. S. et al. 2020). The proposed method combines ground-point estimation based on texture segmentation using a Gabor filter with calculation of the rate of change of angle. A person’s movement is tracked using a Kalman Filter (Kim, Y., & Bang, H. 2019), and the angle between the tracked points with respect to a ground point is calculated. Two public datasets, the UR Fall Dataset (URFD) and the Fall Detection Dataset (FDD), were used. From the experimental analysis, the system achieved an accuracy of 90.53% with a sensitivity of 91.17% and a specificity of 96%. Sometimes, standard machine learning algorithms by themselves may not give the desired results, and so they may be modified using another algorithm. This is shown in the work by (Xiong et al. 2018), which uses Standard Binary Particle Swarm Optimization (SBPSO) to solve overfitting problems in the Support Vector Machine (SVM). Experimental results show that the proposed method achieves higher accuracy (about 99%) than a non-optimized Support Vector Machine (SVM), k-Nearest Neighbours (kNN), and a threshold-based method when classifying Activities of Daily Living (ADL) and abnormal falls.
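For the vision-based deep learning approach, a three-convolutional-layer network in the spirit of the one described above can be sketched in Keras. The 128/128/64 filter counts are one plausible reading of the reported 128x128x64 architecture, and the input size, pooling layers, optimizer, and two-class output are assumptions rather than the configuration used by (Espinosa, R., et al. 2020).

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_fall_cnn(input_shape=(64, 64, 1), n_classes=2):
    """A small CNN for classifying pre-processed video frames as fall / no fall."""
    model = keras.Sequential([
        keras.Input(shape=input_shape),
        layers.Conv2D(128, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_fall_cnn()
model.summary()
```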
Machine Learning Algorithms
This section discusses the most common machine learning and deep learning algorithms applied to fall detection in the reviewed studies.
Table 1 summarizes the accuracies and findings of the most common machine learning algorithms in the research domain. From the table, we can see that (Villasenor et al. 2020) implemented a multimodal approach which used different combinations of sensors, called modalities. The sensors were InfraRed (IR), Inertial Measurement Unit (IMU), Electroencephalography (EEG), and Camera (CAM), and the modalities were: IR, IMU, IMU+EEG, IR+IMU+EEG, CAM, IR+CAM, and IMU+EEG+CAM. The algorithms used were Support Vector Machine (SVM), Random Forest (RF), Multi-Layer Perceptron (MLP), and k-Nearest Neighbors (kNN). The highest accuracy in the whole set was 95.76%, achieved by the RF algorithm with the IMU modality. SVM had the lowest accuracy for 4 out of 7 modalities when compared with the other algorithms. Accuracy of 90% and above was achieved for every modality except CAM, which ranged from 27% to 30%, IR, which ranged from 61% to 67%, and CAM+IR, which ranged from 60% to 65%. All four algorithms achieved above 90% accuracy, but only on the modalities where this level was reached.

In the study of (Lee, 2018), a wearable approach achieved an average accuracy of 96.14% using SVM. (Quadros et al. 2018) used a wearable approach following a Threshold-Based Method (TBM), a Threshold-Based Method with Madgwick’s Decomposition (TBM-MD), and traditional Machine Learning Methods (MLM). The MLM approach used k-Nearest Neighbors (kNN), Linear Discriminant Analysis (LDA), Logistic Regression (LR), Decision Tree (DT), and Support Vector Machine (SVM), while the other two approaches used different combinations of the input signals Total Velocity (TV), Vertical Velocity (VV), Total Displacement (TD), and Vertical Displacement (VD). In the MLM approach, all algorithms achieved above 90% accuracy; the highest accuracy, 99%, was obtained with the kNN algorithm, which was also the highest result in the study, and the lowest was 95% with the DT. For the TBM-MD approach, the highest accuracy was 91.1% and the lowest was 88%. For the TBM approach, the highest accuracy was 89.1% with the TA+TV signal combination and the lowest was 83.3% with the TV signal.

Moreover, (Villasenor et al. 2020) utilized a multimodal approach implementing the Random Forest (RF), Support Vector Machine (SVM), Multi-Layer Perceptron (MLP), and k-Nearest Neighbors (kNN) algorithms. The highest accuracy, 95.09%, was achieved by RF, and the lowest, 91.16%, by SVM. Another study, conducted by (Espinosa et al. 2020), followed a vision-based approach using the standard algorithms k-Nearest Neighbors (kNN), Multi-Layer Perceptron (MLP), Random Forest (RF), and Support Vector Machine (SVM), together with a Convolutional Neural Network (CNN). The highest accuracy, 95.64%, was achieved by the CNN, and the lowest, 27.30%, by kNN. The CNN was also the only algorithm to achieve above 90% accuracy, as all the other algorithms scored very low accuracies between 27% and 32%. (Kavya et al. 2020) used a vision-based approach which achieved an average accuracy of 90.53% using a combination of Gabor and Kalman filters. Finally, (Xiong et al. 2018) implemented a wearable approach which achieved 99% accuracy using SVM-SBPSO, a Support Vector Machine modified using Standard Binary Particle Swarm Optimization (SBPSO).
From Table 1, we can see that most algorithms in the reviewed papers achieved 90% accuracy or higher. The highest accuracy among all the different approaches, modalities, signal combinations, and tests was 99%, achieved by (Xiong et al. 2018) using the SVM-SBPSO algorithm, and the lowest was 27%, achieved by (Espinosa et al. 2020) using kNN. The most common algorithm was SVM, which appeared in all of the papers. In general, when the different approaches are compared, multimodal ones gave higher average results, while environmental ones achieved lower results. This can be explained by the fact that multimodal systems combine data from different sources to give higher average results, while environmental applications face the challenge of environmental noise, which can often be difficult to overcome.
Table 1. Summary of Related Work
Authors | Approach | Algorithms/Techniques | Datasets/Data Collection | Results
Villasenor et al. 2020 | Multimodal | SVM, kNN, RF, MLP | UP-Fall Detection Dataset | Accuracy: 95.76% (RF with IMU)
Lee 2018 | Wearable | SVM | Own data from participants aged 21 to 24; 202 falls and 212 activities; accelerometer and gyroscope of a Samsung Galaxy Note 1 | Accuracy: 96.14%
Quadros et al. 2018 | Wearable | Threshold-Based Method (TBM), Threshold-Based Method with Madgwick’s Decomposition (TBM-MD), kNN, LDA, LR, DT, SVM | Own data from 22 volunteers (average age 26.2 years); 792 falls and ADLs; Arduino UNO with triaxial gyroscope, triaxial magnetometer and triaxial accelerometer | Accuracy: 99% (kNN)
Villasenor et al. 2020 | Multimodal | SVM, RF, MLP, kNN | UP-Fall Detection Dataset | Accuracy: 95.09% (RF)
Espinosa et al. 2020 | Vision | CNN | UP-Fall Detection Dataset | Accuracy: 95.64% (128x128x64 CNN architecture)
Kavya et al. 2020 | Vision | Gabor filter, Kalman filter | UR Fall Detection Dataset | Accuracy: 90.53%
Xiong et al. 2018 | Wearable | SVM, SVM-SBPSO, KVM, kNN, threshold-based | Own data from 10 participants; 800 falls and ADLs; accelerometer | Accuracy: 99% (SBPSO-SVM)
Tools, Techniques and Datasets
This section discusses and summarizes the main tools, technologies, and datasets that have recently been applied in fall detection for elderly people.
Python
Python is an open-source, high-level, general-purpose programming language. Python’s syntax is simple compared to other languages and is known to be much easier to learn for those without a programming background. However, the main benefit of Python is that its standard library and ecosystem contain countless modules which make it applicable to different fields such as finance, business, education, and more. In the machine learning field, many libraries are used in standard practice, but the three main libraries used in this research domain are Scikit-learn, Keras, and Pandas.
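As a small illustration of how Pandas is typically combined with these libraries, the hypothetical sketch below loads a CSV of windowed accelerometer readings and aggregates each window into simple statistical features that could then be passed to a scikit-learn classifier. The file name and column names are assumptions and do not correspond to any specific dataset.

```python
import pandas as pd

# Hypothetical file: one row per sample with a window id, label and tri-axial readings.
df = pd.read_csv("sensor_readings.csv")  # columns: window_id, label, acc_x, acc_y, acc_z

# Aggregate each window into simple statistical features (mean and std per axis).
features = df.groupby("window_id")[["acc_x", "acc_y", "acc_z"]].agg(["mean", "std"])
labels = df.groupby("window_id")["label"].first()

print(features.head())
```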
MATLAB
MATLAB is a software environment and scientific programming language which supports simple as well as advanced mathematical features. The language is used to write scripts performing various actions, while the environment is used to execute the scripts and inspect the output. It also provides a collection of tools for analysis, visualization, and debugging. It is widely used in mathematics, science, and engineering, and is also commonly used to implement and test machine learning algorithms.
Figure 5. Tools and Technologies used in Machine Learning
Datasets
Public fall detection datasets make their data available for use in future research. They contain thousands of rows of fall data collected for the purpose of training fall detection models. Datasets vary in their data collection methods and model different types of falls and activities. In addition, they differ in the types of participants used to generate the fall data; some datasets use elderly people, while others use younger test subjects. Table 2 summarizes the datasets and their characteristics.
Table 2. Summary of Fall Detection Datasets
Reference | Dataset Name | Type | Input Devices | Participants | Falls | Activities
Kwolek & Kepski 2014 | UR | Multimodal | Accelerometer (1), Kinect camera (2) | 5 people aged 26 and up | 1. Fall from standing; 2. Fall from sitting on a chair | 1. General activity
Martínez-Villaseñor et al. 2019 | UP | Multimodal | Wearable IMU (5), EEG headset (1), cameras (2), infrared (6) | 17 people aged 18-24 | 1. Forward fall using hands; 2. Forward fall using knees; 3. Sideways fall; 4. Fall from sitting in a chair; 5. Backward fall | 1. Walking; 2. Standing; 3. Jumping; 4. Lying; 5. Picking up an object
Sucerquia et al. 2017 | SisFall | Wearable | Accelerometer (2), gyroscope (1) | 38 people, a mix of young and elderly | 15 fall types, modelling specific combinations of direction, activity, time and impact | 19 activities
Casilari et al. 2017 | UmaFall | Wearable | IMU sensors (4), smartphone (1) | 17 people aged 18-55 | 1. Forward fall; 2. Lateral fall; 3. Backward fall | 1. Normal walking; 2. Light jogging; 3. Body bending; 4. Hopping; 5. Climbing up stairs; 6. Climbing down stairs; 7. Lying on a bed; 8. Sitting on a chair
Ma et al. 2014 | SDUFall | Vision | Kinect camera (1) | 10 young people | 1. Falling | 1. Lying; 2. Walking; 3. Sitting; 4. Squatting; 5. Bending
Zhang et al. 2014 | OCCU | Vision | Kinect camera (2) | 5 people | Falling in eight directions: north, south, east, west, north-west, north-east, south-west, south-east | 1. Picking up an object; 2. Sitting on the floor; 3. Lying down; 4. Performing planks; 5. Tying shoes
Li et al. 2012 | Li et al. 2012 | Environmental | Microphone array (1) | 3 stunt actors aged 32, 30 and 46 | 20 different falls modelling direction (forwards, backwards, left, right) and activities (trips, slips, fainting, etc.) | 20 different activities which a typical person would perform daily
Methodology
This research was conducted by searching public databases of scientific journals such as IEEE Xplore, ResearchGate, JSTOR, ProQuest, and EBSCO. Figure 4 presents a summary of the research methodology that was followed. The following steps were taken to select the research papers:
Figure 4. Research Process
Challenges in Fall Detection
Although the topic of fall detection using machine learning has been addressed in many studies, there are still some challenges in this research area, such as:
Conclusion
The research conducted here analyzed multiple research articles on fall detection and machine learning applications. It was found that most machine learning algorithms achieved an accuracy of more than 90%; however, this was not consistent across the studies. In other words, an algorithm may achieve high accuracy in one research work but lower accuracy in others. This is because the performance of an algorithm depends on the implementation, not just on its built-in characteristics such as structure or formulas. Moreover, the type of input fall data fed into the algorithms also affects performance. Most papers used standard machine learning algorithms that are applicable in multiple research areas, such as Support Vector Machine, k-Nearest Neighbors, and Random Forest. Less common algorithms such as Madgwick’s Decomposition and SBPSO are also present in the surveyed papers but have limited peer-reviewed support. In terms of tools, the Python language and its machine learning libraries Scikit-learn, Keras, and Pandas were found to be the most frequently used. More research testing other machine learning algorithms should be carried out in the future.
Conflict of interest
The authors declare no potential conflict of interest regarding the publication of this work. In addition, the ethical issues, including plagiarism, informed consent, misconduct, data fabrication and/or falsification, double publication and/or submission, and redundancy, have been completely observed by the authors.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
References
Brownlee, J. (2020). Train-test split for evaluating machine learning algorithms. Machine Learning Mastery.
Chauhan, N. S. (2019). Decision Tree Algorithm—Explained. Towards Data Science, 24.
Campos, S. M. T. (2021). Image Manipulation and Classification: An Application to Fire Detection.
Coursera (2021). What is Python used for? A beginner's guide. Available: https://www.coursera.org/articles/what-is-python-used-for-a-beginners-guide-to-using-python.
Casilari, E., Santoyo-Ramón, J. A., & Cano-García, J. M. (2017). Umafall: A multisensor dataset for the research on automatic fall detection. Procedia Computer Science, 110, 32-39.
de Quadros, T., Lazzaretti, A. E., & Schneider, F. K. (2018). A movement decomposition and machine learning-based fall detection system using wrist wearable device. IEEE Sensors Journal, 18(12), 5082-5089.
Espinosa, R., Ponce, H., Gutiérrez, S., Martínez-Villaseñor, L., Brieva, J., & Moya-Albor, E. (2020). Application of convolutional neural networks for fall detection using multiple cameras. In Challenges and Trends in Multimodal Fall Detection for Healthcare (pp. 97-120). Springer, Cham.
Garg, R. (2018). 7 types of classification algorithms. Analytics India Magazine. Available: https://analyticsindiamag.com/7-types-classification-algorithms/. [Accessed: 23-Aug-2021].
Kasai, T. (2021). Preparing for population ageing in the Western Pacific Region. The Lancet Regional Health–Western Pacific, 6.
Kavya, T. S., Jang, Y. M., Tsogtbaatar, E., & Cho, S. B. (2020). Fall detection system for elderly people using vision-based analysis. Science And Technology, 23(1), 69-83.
Kim, Y., & Bang, H. (2019). Introduction to Kalman filter and its applications. Introduction and Implementations of the Kalman Filter, F. Govaers, Ed. IntechOpen.
Kline, A., Kline, T., Abad, Z. S. H., & Lee, J. (2020). Using Item Response Theory for Explainable Machine Learning in Predicting Mortality in the Intensive Care Unit: Case-Based Approach. Journal of Medical Internet Research, 22(9), e20268.
Kwolek, B., & Kepski, M. (2014). Human fall detection on embedded platform using depth maps and wireless accelerometer. Computer methods and programs in biomedicine, 117(3), 489-501.
Lee, J. H. (2018). The Novel Fall Detection and Prevention Algorithm for Elderly People. Sensors & Transducers, 228(12), 79-83.
Li, C., Teng, G., & Zhang, Y. (2019, June). A survey of fall detection model based on wearable sensor. In 2019 12th International Conference on Human System Interaction (HSI) (pp. 181-186). IEEE.
Li, Y., Ho, K. C., & Popescu, M. (2012). A microphone array system for automatic fall detection. IEEE Transactions on Biomedical Engineering, 59(5), 1291-1301.
Igual, R., Medrano, C., & Plaza, I. (2013). Challenges, issues and trends in fall detection systems. Biomedical engineering online, 12(1), 1-24.
Mendis, B. S. (2021). Identifying the ethno-nationality of English bloggers using deep learning (Doctoral dissertation).
Martínez-Villaseñor, L., Ponce, H., Brieva, J., Moya-Albor, E., Núñez-Martínez, J., & Peñafort-Asturiano, C. (2019). UP-fall detection dataset: A multimodal approach. Sensors, 19(9), 1988.
Martinez-Villaseñor, L., & Ponce, H. (2020). Design and analysis for fall detection system simplification. JoVE (Journal of Visualized Experiments), (158), e60361.
Ma, X., Wang, H., Xue, B., Zhou, M., Ji, B., & Li, Y. (2014). Depth-based human fall detection via shape features and improved extreme learning machine. IEEE journal of biomedical and health informatics, 18(6), 1915-1922.
Mozaffari, N., Rezazadeh, J., Farahbakhsh, R., Yazdani, S., & Sandrasegaran, K. (2019). Practical fall detection based on IoT technologies: A survey. Internet of things, 8, 100124.
Najmi, A. (2019). Imputation of missing product information using deep learning: A use case on the amazon product catalogue (Doctoral dissertation, Master’s thesis, TECHNISCHE UNIVERSITÄT MÜNCHEN).
Ponce, H., Martínez-Villaseñor, L., Núñez-Martínez, J., Moya-Albor, E., & Brieva, J. (2020). Open source implementation for fall classification and fall detection systems. In Challenges and Trends in Multimodal Fall Detection for Healthcare (pp. 3-29). Springer, Cham.
Ruder, S. (2017). Transfer learning-machine learning’s next frontier. Accessed: April.
Rodrigues, T. B., Salgado, D. P., Cordeiro, M. C., Osterwald, K. M., Teodiano Filho, F. B., de Lucena Jr, V. F., ... & Murray, N. (2018). Fall detection system by machine learning framework for public health. Procedia Computer Science, 141, 358-365.
Sucerquia, A., López, J. D., & Vargas-Bonilla, J. F. (2017). SisFall: A fall and movement dataset. Sensors, 17(1), 198.
Varalakshmi, M. I., Mahalakshmi, M. A., & Sriharini, M. P. (2020). Performance Analysis of Various Machine Learning Algorithm for Fall Detection-A Survey. In 2020 International Conference on System, Computation, Automation and Networking (ICSCAN) (pp. 1-5). IEEE.
Wang, X., Ellul, J., & Azzopardi, G. (2020). Elderly fall detection systems: A literature survey. Frontiers in Robotics and AI, 7, 71.
Xiong, W., Ning, Y., Liang, S., Zhao, G., Ma, Y., Gao, X., & Zhu, Y. (2018). Accurate fall detection algorithm based on sbpso-svm classifier. In Proceedings of the 2018 10th International Conference on Bioinformatics and Biomedical Technology (pp. 83-86).
Zhang, Z., Conly, C., & Athitsos, V. (2014). Evaluating depth-based computer vision methods for fall detection under occlusions. In International Symposium on Visual Computing (pp. 196-207). Springer, Cham.