The Islamic University of Gaza
Deanship of Research and Graduate Studies
Faculty of Information Technology
Master of Information Technology

UAV Detection Model Using Histogram of Oriented Gradients and SVM

By
Mohammed Khalil Salah Abu-Jamous

Supervised by
Dr. Ashraf Y. A. Maghari
Assistant Professor of Computer Science

A thesis submitted in partial fulfilment of the requirements for the degree of Master of Information Technology at the Islamic University of Gaza

December 2019

Declaration

I, the undersigned, submitter of the thesis entitled:

UAV Detection Model Using Histogram of Oriented Gradients and SVM

declare that the contents of this thesis are the product of my own effort, except where otherwise indicated, and that this thesis as a whole, or any part of it, has not been submitted by others to obtain a degree or an academic or research title at any other educational or research institution.

I understand the nature of plagiarism, and I am aware of the University's policy on this. The work provided in this thesis, unless otherwise referenced, is the researcher's own work and has not been submitted by others elsewhere for any other degree or qualification.

Student's name: Mohammed Khalil Salah Abu-Jamous
Signature:
Date: 15/12/2019


Abstract

Object detection is a well-known challenge in computer vision, and it becomes more complicated when the object of interest occupies a small portion of the field of view, possibly moving within complex backgrounds and under widely varying illumination. Solving such a difficult problem requires a detection algorithm that can tolerate shape variations due to rotation, translation, and other geometric transformations. Relying on color information is also not feasible, because of the wide diversity of illumination, especially in outdoor scenes, and because of the colorful range of UAV parts. A model that uses the Histogram of Oriented Gradients (HOG) is proposed to extract UAV features; an SVM is then employed to learn the UAV-specific features and detect them later. To enhance the detection accuracy, other supporting techniques were utilized, such as image pyramids and non-local maximum suppression. As the problem is relatively new, no standard or de-facto standard image library or dataset exists for benchmarking UAV detection, and the limited amount of UAV training images is one of the biggest challenges. So part of our work was to select a dataset of UAV images for training and testing purposes, then update it by combining images from existing related research and other sources. The training and testing datasets used should be realistic and reflect the real-world challenges mentioned previously, such as geometric transformations and variable illumination. Our model achieved good performance, with an F1 score of about 0.57 when counting false positives per image. The results are comparable to more modern techniques that are based on convolutional neural networks. The model can still be further improved by utilizing other techniques, such as an ensemble of exemplar SVMs, or discriminative part training techniques with latent SVM.

Keywords: Object detection, Drone detection, UAV Detection, HOG, SVM.

الملخص (Arabic Abstract)

Detecting objects in images and video is a well-known problem and challenge in the field of computer vision. The challenge is especially hard when the object to be detected is small relative to the image frame, and it increases when the object is moving and its motion takes place against complex scene backgrounds with wide variations in illumination. In this research, a model is proposed for detecting unmanned aerial vehicles, especially the small ones known as 'drones', using the Histogram of Oriented Gradients (HOG) and Support Vector Machines (SVM). Solving such a difficult problem requires a detection algorithm capable of responding to changes in the target's shape due to rotation, translation, and any geometric transformation. Detection cannot rely on color either, because of the diversity of illumination, especially in outdoor scenes, and because the targets do not all share the same color. We therefore proposed a model that detects these aircraft in images by describing the oriented gradients of shape edges using numerical aggregations (HOG), which are then fed to a supervised machine learning system, the SVM, so that after training the system can detect similar objects. To increase the detection accuracy, supporting techniques were employed: image pyramids, and a technique that keeps the strongest detection and suppresses the others in its local neighborhood. Since the problem is relatively new, there are no standard image databases for testing and training such a model, so part of our research effort was to select a suitable image dataset and then update it from various sources and related research, choosing it to fit the research goal, so that the number of images is sufficient for training and testing, the images are real and realistic, and they reflect the nature of the problem. Our model achieved good results compared with more complex methods that rely on multi-layer convolutional neural networks, reaching an F1 score of 0.57 when counting correct and incorrect detections per image. It is worth mentioning that our results can be built upon and our model developed further using ensemble machine learning techniques, exemplar SVMs, or latent SVM.

Keywords: object detection, UAV detection, drone detection, Histogram of Oriented Gradients, Support Vector Machines.

In the name of Allah, the Most Gracious, the Most Merciful

"Recite in the name of your Lord who created (1) Created man from a clinging substance (2) Recite, and your Lord is the most Generous (3) Who taught by the pen (4) Taught man that which he knew not (5)"

[Al-'Alaq: 1-5]

Dedication

To my parents, Khalil and Ghadah, who covered me with their love and patience and gave me all their attention from my childhood until now.

To my lovely wife Safa'a, who supported me in doing this work with her continuous encouragement and understanding.

To my kids Khalil, Mahmoud, Mariam, Hasan, and Jood, who forgave my absence due to my long engagement in work and study.

To the soul of my brother Hasan in heaven, whose wisdom and smile I never forget and never shall.

To my grandmother Emtithal, who lives far away from us, separated by long distances, on the coasts of North America.

To all my ancestors, who gave me courage, honesty, and life.

To our motherland Palestine, and to our nation, the 'Ummah'.

Acknowledgment

I would like to express my warmest thanks and appreciation to my supervisor, Dr. Ashraf Al-Maghari, who helped me to complete this work through his guidance, experience, and advice.

Special thanks to Dr. Ashraf Al-Attar, who started the supervision of this work before moving to the USA. He encouraged me to work in the field of computer vision and artificial intelligence.

Finally, thanks to all who helped make this work see the light.

Table of Contents

Declaration
Thesis Judgment Result (نتيجة الحكم على أطروحة الماجستير)
Abstract
الملخص (Arabic Abstract)
Dedication
Acknowledgment
Table of Contents
List of Tables
List of Figures
List of Abbreviations

Chapter 1 Introduction
  1.1 Background and Context
    1.1.1 Feature Extraction Combined with Machine Learning
    1.1.2 Deep Learning Techniques
  1.2 Statement of the Problem
  1.3 Importance
  1.4 Scope and Objectives
    1.4.1 Main Scope
    1.4.2 Main Objective
    1.4.3 Specific Objectives
  1.5 Limitations
  1.6 Structure of Thesis

Chapter 2 Literature Review
  2.1 Background
    2.1.1 Object Detection
    2.1.2 Machine Learning
  2.2 Related Works
    2.2.1 Object Detection Based on Object Features
    2.2.2 Object Detection Using Deep Networks
    2.2.3 UAV Detection in Computer Vision
    2.2.4 UAV Detection Utilizing HOG

Chapter 3 Methodology
  3.1 Phase 1: Collecting images from previous related researches and the Internet to create training and testing datasets
  3.2 Phase 2: Apply preprocessing tasks to prepare our dataset for the HOG feature extraction algorithms
    3.2.1 Resizing image sizes
    3.2.2 Image padding
    3.2.3 Aspect ratio manipulation
    3.2.4 Normalization
    3.2.5 Color conversion
    3.2.6 Sharpening
  3.3 Phase 3: Building and programming the HOG feature descriptor extraction algorithm
    3.3.1 Image Pyramids
    3.3.2 Sliding Window
    3.3.3 HOG Descriptor
  3.4 Phase 4: Training the SVM
  3.5 Phase 5: Evaluate and assess the model
    3.5.1 Image pyramids
    3.5.2 Sliding window
    3.5.3 Detection by SVM
    3.5.4 Non-local maximum suppression
    3.5.5 Applying detection measurement metrics
  3.6 Tools, equipment, resources, methods, models
    3.6.1 Resources
    3.6.2 Tools
    3.6.3 Methods and Models
    3.6.4 Equipment

Chapter 4 Results and Evaluation
  4.1 Dataset
  Evaluation Metrics
    Accuracy
    Precision
    Recall
    F1 Score
  Experiments
    Visualization of Average HOG for Positive Training Dataset
    Random feature reduction 2D visualization
    PCA Truncated SVD feature reduction 2D visualization
    PCA feature reduction 3D visualization
    t-SNE feature reduction 2D visualization
    t-SNE feature reduction 3D visualization
    Direct t-SNE feature reduction 2D visualization
    HOG features vector of a sample UAV image
    Hard Mining
    Color schemes
    Number of orientation bins
    Normalization
    Block and Cell Sizes
    Detector Window and Context
    Negative image set
    Sharpening
    Classifier
    Examples of the detections
    Final Results Overview

Chapter 5 Conclusions and Future Works
  Conclusions
  Future Works

The Reference List

List of Tables

Table (4.1): Our reference model hyper parameters for both HOG and SVM
Table (4.2): Evaluation results of reference model when used on test dataset for UAV detection
Table (4.3): The effect of color scheme selection on our UAV detection model
Table (4.4): The effect of orientations' bins number on our reference detection model
Table (4.5): The effect of different normalization methods on our reference detector
Table (4.6): Padding of positive images effect on the UAV detection model

List of Figures

Figure (2.1): Maximum-margin hyperplane and margin for an SVM trained on two classes. Sample points that lie exactly on the margins are called support vectors
Figure (3.1): Our UAVs detection model methodology phases
Figure (3.2): Training image padding samples
Figure (3.3): A sample training image
Figure (3.4): Example of an image pyramid for a sample test image
Figure (4.1): A group of sample images from the UAVs dataset
Figure (4.2): Sample of our UAVs dataset with 16 pixels padding, each of final size 64 pixels by 64 pixels
Figure (4.3): Mean HOG features visualization of positive UAV images from the training dataset, per orientation bin (11 bins), of our reference model
Figure (4.4): Random projection of HOG features of our reference model; orange points represent the positive samples of the training dataset
Figure (4.5): Truncated SVD projection of our reference model HOG features; the orange points represent the positive samples of the training dataset
Figure (4.6): PCA with 3 principal components, 3D projection of our reference model HOG features; the red points represent the positive samples of the training dataset
Figure (4.7): t-SNE, 2 components of the first 50 PCA components of our reference model HOG features projected in 2D; the orange points represent the positive samples of the training dataset
Figure (4.8): t-SNE, 3 components of the first 50 PCA components of our reference model HOG features projected in 3D; the red points represent the positive samples of the training dataset
Figure (4.9): Direct t-SNE, 2 components of our reference model HOG features projected in 2D; the orange points represent the positive samples of the training dataset
Figure (4.10): On the right, a sample UAV image; on the left, its HOG descriptor visualized. Note how HOG captured the edges of the UAV like a T shape
Figure (4.11): F1 score of our reference UAV detection model when used with various cell and block sizes. The cell size is measured in pixels while block sizes are measured in cells
Figure (4.12): The learning curve of our detection model; the red line is the training accuracy, which is almost 99%, and the green line represents the cross-validation accuracy, which increases as the number of training examples used for training increases
Figure (4.13): UAV detection example (a). Ground truth in green, detection in red
Figure (4.14): UAV detection example (b). Ground truth in green, detection in red
Figure (4.15): UAV detection example (c). Ground truth in green, detection in red
Figure (4.16): UAV detection example (d). Ground truth in green, detection in red
Figure (4.17): UAV detection example (e). Ground truth in green, detection in red
Figure (4.18): UAV detection example (f). A very small target has been detected. Ground truth in green, detection in red
Figure (4.19): UAV detection example (g). A large target detection. Ground truth in green, detection in red
Figure (4.20): UAV detection example (h). Ground truth in green, detection in red
Figure (4.21): UAV detection example (i). Missed detection. Ground truth in green, detection in red
Figure (4.22): UAV detection example (j). No targets and no detections
Figure (4.23): UAV detection example (k). No targets and no detections
Figure (4.24): UAV redundant detections eliminated by NLMS; the red box is the final detection box, the blue boxes are the original detections by the SVM
Figure (4.25): UAV redundant detections eliminated by NLMS; the red box is the final detection box, the blue boxes are the original detections by the SVM
Figure (4.26): UAV redundant detections eliminated by NLMS; the red box is the final detection box, the blue boxes are the original detections by the SVM
Figure (4.27): UAV redundant detections eliminated by NLMS; the red boxes are the final detection boxes, the blue boxes are the original detections by the SVM
Figure (4.28): UAV redundant detections eliminated by NLMS; the red box is the final detection box, the blue boxes are the original detections by the SVM

List of Abbreviations

AI: Artificial Intelligence
ANN: Artificial Neural Network
CNN: Convolutional Neural Networks
CV: Computer Vision
FPPI: False Positive Per Image
FPPW: False Positive Per Window
GPS: Global Positioning System
HOG: Histogram of Oriented Gradients
IoU: Intersection over Union
KNN: K-Nearest Neighbours
LBP: Local Binary Patterns
LBPHF: Local Binary Patterns Histogram Fourier
LTP: Local Ternary Patterns
MDP: Markov Decision Process
ML: Machine Learning
NLMS: Non-Local Maximum Suppression
PCA: Principal Component Analysis
R-CNN: Region Proposals CNN
RBF: Radial Basis Function
RPN: Region Proposal Networks
SGD: Stochastic Gradient Descent
SIFT: Scale Invariant Feature Transform
SSD: Single Shot Detector
ST-Cubes: Spatio-Temporal Cubes
SURF: Speed Up Robust Features
SVD: Singular Value Decomposition
SVM: Support Vector Machine
t-SNE: T-distributed Stochastic Neighbour Embedding
UAV: Unmanned Air Vehicle
YOLO: You Only Look Once


Chapter 1 Introduction

1.1 Background and Context

Unmanned Air Vehicles (UAVs) are becoming increasingly popular for hobbyists and recreational use, and there is a growing interest in their commercial, industrial, and military use. With this surge in popularity comes an increased risk to privacy, since UAV technology makes it easy to spy on people and private property, such as an individual's home or a competitor's factories and premises. Today the skies of the modern world are occupied not only by birds and airplanes but also by UAVs, which in turn increases the threat to public safety. Cases of UAVs disturbing airline flight operations and leading to near collisions are becoming more and more common (Chen, Aggarwal, Choi, & Kuo, 2017).

With the rapid development of UAVs and the technology used to construct them, the number of UAVs manufactured for military, commercial, or recreational purposes increases sharply with each passing day. This situation poses crucial privacy and security threats when cameras or weapons are attached to the UAVs (Aker & Kalkan, 2017; Chen et al., 2017; Jin, Jiang, Qi, Lin, & Song, 2019; Rozantsev, Lepetit, & Fua, 2017). Therefore, finding UAV detection, monitoring, and tracking systems that can detect and identify illegal UAVs becomes more important every day. Airport safety demands such systems; private properties, diplomatic offices, embassies, and protected areas need them too (Aker & Kalkan, 2017; Chen et al., 2017).

However, UAV detection and recognition is not an easy task. Although object detection is a well-studied problem, the field is still striving for new methods that are faster, more accurate, more reliable, and adaptable to different environments. The UAV detection problem raises some unique challenges in the object detection field, such as the wide variety of UAV shapes and colors, complex backgrounds and environments, fast motion and speeds, free 3D movement in any direction and orientation, and day and night detection under various illumination and shadow conditions, to name a few. All of these factors contribute in different ways to the unique challenge of UAV detection (Aker & Kalkan, 2017; Chen et al., 2017; Gökçe, Üçoluk, Şahin, & Kalkan, 2015; Rozantsev et al., 2017).

Detecting the position and attributes of UAVs, like speed and direction, before an undesirable event has become very crucial. The unpredictable computer-controlled movements, speed, and maneuvering abilities of UAVs, and their resemblance to birds when observed from a distance, make it challenging to detect, identify, and correctly localize them (Aker & Kalkan, 2017).

In order to solve this problem, one can think of various types of sensors to perceive the presence of a UAV in the environment. These may include global positioning systems, radio waves, infrared, audible sound or ultrasound signals, and Computer Vision (CV) (Ganti & Kim, 2016). Researchers have tried to address this problem in different ways. Using special radar systems is one of them (Shi et al., 2018). Others have tried radio frequency detection, detecting the wireless communications of the UAV and analyzing them to recognize it (Nguyen, Ravindranatha, Nguyen, Han, & Vu, 2016). Sound analysis has also been examined using acoustic cameras and other sound analysis techniques (Busset et al., 2015; Hauzenberger & Holmberg Ohlsson, 2015; Mezei, Fiaska, & Molnár, 2015). An acoustic sensor is an array of sound-sensing devices that captures sounds and analyzes them to recognize the target from its sound fingerprint, and then localizes it by computing the time-delay variations between the sound receivers and the target object that produces the sounds. A Computer Vision (CV) approach depends on optical sensing, in which images or videos of suspect targets are processed to identify the target and estimate its position within the field of view. CV approaches for UAV detection have been used in combination with other approaches (Ganti & Kim, 2016). Yet an automated, cost-effective UAV detection system is still a vital need. It has been reported that most of the non-CV approaches to UAV detection have many limitations for this problem, and it has been suggested that CV techniques be used (Ganti & Kim, 2016; Gökçe et al., 2015). The optical sensing approach was used in this research by investigating one of the state-of-the-art CV techniques to address the UAV detection problem.

The object detection problem in CV has been tackled successfully in the automotive world, and there are now commercial products ("mobileye," 2018; Rozantsev, Lepetit, & Fua, 2015) designed to sense and avoid both pedestrians and other cars. Flying object detection, however, poses some unique challenges (Aker & Kalkan, 2017; Jin et al., 2019; Rozantsev et al., 2017). Object detection algorithms in CV can be divided into two main categories: feature extraction combined with Machine Learning (ML), and deep learning techniques. In this work the first approach is used.

1.1.1 Feature Extraction Combined with Machine Learning

Many algorithms exist for image feature extraction. Scale Invariant Feature Transform (SIFT) is a feature detection algorithm used to detect and describe local features in images (Panchal, Panchal, & Shah, 2013). In SIFT, key points are taken as maxima/minima of the Difference of Gaussians (DoG) that occur at multiple scales of the image. SIFT is invariant to uniform scaling, orientation, and illumination changes. Speed Up Robust Features (SURF) is another image feature descriptor; it uses an integer approximation of the determinant-of-Hessian blob detector, which can be computed with three integer operations using a precomputed integral image. SURF has been shown to be tolerant of variations in illumination, scale, and blurring (Bay, Ess, Tuytelaars, & Van Gool, 2008).

The Histogram of Oriented Gradients (HOG) feature descriptor was first described by McConnell in 1986 (McConnell, 1986), but was widely and successfully used by Dalal & Triggs in 2005 (Dalal & Triggs, 2005), who focused on pedestrian detection in images and then in videos. HOG has been successfully used to detect a variety of animals and vehicles (Hu, Liu, Li, & Xing, 2010; Malisiewicz, Gupta, & Efros, 2011). The concept behind HOG is that an object can be described by the distribution of its edge directions. In HOG, the image is divided into a grid of small cells, and a histogram of gradient directions is computed in each cell. The cell histograms are then grouped into blocks, where each block contains one or more cells and blocks usually overlap by one or more cells. The overlapping of the blocks means that each cell votes for histograms in multiple blocks, which gives better normalization and illumination invariance. The descriptor is the concatenation of the histograms computed over the blocks. The key advantage of HOG is that it is invariant to photometric and some geometric transformations (Dalal & Triggs, 2005).
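As a concrete illustration of the cell and block structure just described, the following is a minimal sketch (not the thesis code) of extracting a HOG descriptor with scikit-image; the file name and parameter values are assumptions for illustration only.

```python
# Minimal HOG extraction sketch using scikit-image (illustrative values only).
from skimage import color, io, transform
from skimage.feature import hog

# "uav_sample.jpg" is a placeholder file name.
patch = color.rgb2gray(io.imread("uav_sample.jpg"))
patch = transform.resize(patch, (64, 64))   # detector-window size used later in this thesis

features = hog(
    patch,
    orientations=9,              # number of gradient-direction bins per cell
    pixels_per_cell=(8, 8),      # cell size in pixels
    cells_per_block=(2, 2),      # cells grouped into one (overlapping) block
    block_norm="L2-Hys",         # per-block normalization scheme
)
print(features.shape)            # one flat vector: concatenated block histograms
```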

In this research we propose a model to detect UAVs in images based on HOG. The extracted image features need a method to compare them and measure similarity. Euclidean distance is one of the measures used for this purpose. Supervised machine learning techniques have also been used to accomplish the same task faster and more accurately. Stochastic Gradient Descent (SGD) classifiers, K-Neighbors classifiers, and linear regression are all supervised machine learning techniques used to classify features based on similarities. The Support Vector Machine (SVM) is one of the popular machine learning algorithms that has been widely used. Dalal et al. used an SVM in their work on HOG (Dalal & Triggs, 2005) and showed successful results detecting pedestrians. The wide use of SVM in image classification and segmentation (Barghout, 2015; Chapelle, Haffner, & Vapnik, 1999; Gupta, Girshick, Arbeláez, & Malik, 2014) encouraged us to use it in this work.

1.1.2 Deep Learning Techniques

The successful use of deep learning in the field of data science encouraged computer scientists to use it in image classification and object detection. Deep learning networks use the same concepts as Artificial Neural Networks (ANN), where multiple layers of nonlinear processing units are utilized for feature extraction and transformation. Each layer uses the output of the previous layer as its input. The use of Convolutional Neural Networks (CNN) as a feature extraction layer is common among deep learning techniques for image classification (Girshick, 2015; Girshick, Donahue, Darrell, & Malik, 2014; Ren, He, Girshick, & Sun, 2015). In CNN-based deep learning, the detection systems are built as end-to-end systems, where feature extraction, classification, and region localization are all done by the neural networks. The Region Proposals CNN (R-CNN), Single-Shot Detector (SSD), and You Only Look Once (YOLO) are all examples of deep learning techniques used for object detection (Liu et al., 2016; Redmon & Farhadi, 2017).

The main drawbacks of deep learning methods are their need for a very large training dataset and very long training times (Aker & Kalkan, 2017; Chen et al., 2017; Girshick, 2015; Girshick et al., 2014; Liu et al., 2016; Redmon & Farhadi, 2017; Ren et al., 2015).

1.2 Statement of the Problem

UAV detection is a difficult task because of the diversified and complex backgrounds of real-world environments and the numerous UAV types on the market. Hence, detecting UAVs before an undesirable event has become very crucial. To solve this problem, one can think of various types of sensors to perceive the presence of a UAV in the environment, including global positioning systems, radio waves, infrared, audible sound or ultrasound signals, and CV. However, it has been reported that most of the non-CV approaches to UAV detection have many limitations for this problem, and it has been suggested that CV techniques be used. In summary, a detection and tracking system is needed to close the gap of security, safety, and privacy risks imposed by UAVs. That is why the ability to use inexpensive sensors such as cameras for collision avoidance and for detecting illegal UAVs entering secure perimeters will become increasingly important. Our work tries to address this issue.

1.3 Importance

1. Covers the gap of safety risks imposed by UAVs, with the ability to use inexpensive sensors such as cameras for collision-avoidance purposes.
2. Could be used as part of monitoring, tracking, and alerting systems for protected and private areas to enhance safety, security, and privacy, and in airspace management systems to avoid and reduce airplane collision risks.
3. Helps in covering the gap of security and privacy risks imposed by UAVs, with the ability to use inexpensive sensors such as cameras to detect illegal UAVs entering secure perimeters.

4. Improves UAV detection within complex backgrounds and environments.
5. Improves tracking systems that track UAVs, especially small ones.

1.4 Scope and Objectives

1.4.1 Main Scope
To detect UAVs in still images and localize them within the field of view.

1.4.2 Main Objective
To propose a model for detecting UAVs in images and localizing them within the image, using the Histogram of Oriented Gradients as a feature descriptor and SVM as the machine learning algorithm.

1.4.3 Specific Objectives
The specific objectives of the project are:
1. Select and update a dataset of images and the ground truth locations of UAVs in the images.
2. Develop and implement the proposed detection model.
3. Evaluate the proposed model using appropriate measures.

1.5 Limitations

1. The dataset for training and testing will be acquired from other related works and the Internet.
2. The approach is limited to the detection problem and not the tracking of the objects of interest.
3. This work focuses only on using HOG descriptors to recognize the UAVs.

1.6 Structure of Thesis

This thesis is organized as follows:

Chapter 1: Introduction, which gives an overview of the overall problem and briefly presents our proposed solution.

Chapter 2: Literature Review, which gives essential background information related to our work in the fields of CV and ML, lists related works that were studied to build the proposed model and research efforts in the domain of UAV detection, and shows our proposed model's approach to the problem.

Chapter 3: Methodology, which shows the steps and details of our proposed method, with a full explanation of how the detection model was built, along with the tools and equipment used.

Chapter 4: Results and Evaluation, which describes the assessment process of the proposed approach by defining the dataset, evaluation metrics, and result comparisons, and explains the overall results.

Chapter 5: Conclusions and Future Works, which presents the conclusions of our work and discusses future work.

Finally, the references are listed at the end.


Chapter 2 Literature Review

2.1 Background

2.1.1 Object Detection

The detection of objects in images is the process of telling whether a predefined object exists in a given image or not, and if it exists, specifying both the location and size of the smallest rectangle that encloses it (Aker & Kalkan, 2017). Different approaches have been proposed for this purpose, covering a variety of object types, background environments, and applications.

2.1.2 Machine Learning

Machine Learning is a subset of Artificial Intelligence (AI) in which algorithms and statistical models are used to perform specific tasks without explicit and detailed instructions from human programmers, relying instead on pattern recognition and inference. ML is used in a variety of applications such as filtering, anti-virus programs, and CV (Bishop, 2006; Samuel, 1967).

ML Types

There are many types of ML based on their approaches, such as:

Supervised ML
Sufficient data is used to train the model by providing input data and the desired output; after the model is trained, it can be used on input data not seen before to predict a suitable related output (Alzubi, Nayyar, & Kumar, 2018; Russell & Norvig, 2016).

Unsupervised ML
Here the input data provided to the model does not have any corresponding output, and the algorithm tries to find a structure in the data that can represent something such as clusters, classes, or groups. When it is provided with new input data that was not part of the training, it can relate it to one of the previously found groups. Usually such algorithms use some form of distance or similarity measurement to compare the elements of the groups with the new input data. Other tasks for such models include, but are not limited to, summarization (Khanum, Mahboob, Imtiaz, Ghafoor, & Sehar, 2015).

Reinforcement ML
Usually viewed as a Markov Decision Process (MDP); applications include those where fixed ML models cannot be used and sequential adaptation is needed, such as game playing against humans and autonomous vehicle routing (van Otterlo & Wiering, 2012).

ML Models

Many models for ML have been studied by computer scientists; many of them are mature and others are still continuously evolving and developing. ML models include, but are not limited to:

Artificial Neural Networks
Inspired by the biological neural system of the brain, an ANN learns from input data examples. The ANN model consists basically of a set of individual neurons connected to each other in a network, where each single neuron has one or more inputs and sends output signals to one or more neurons in the next layer; each neuron computes a nonlinear function of its inputs to create an output value (Daniel, 2013). Actual implementations include many other complexities, such as organizing the neurons into layers and giving computational weights to the various inputs and outputs. There are many successful uses of ANN in different tasks such as CV, speech recognition, and game playing. Deep learning is a variety of ANN that consists of multiple hidden layers of neural networks; this approach has proved to be successful in CV and speech recognition (Krizhevsky, Sutskever, & Hinton, 2012).

Convolutional Neural Networks
A CNN is a subclass of deep learning, commonly used in CV. In contrast to multilayer perceptrons, where each neuron in one layer is connected to all neurons in the next layer, CNN neurons are not fully connected. The name 'convolutional' indicates that the neural network employs a mathematical operation called convolution.

Convolution is a specialized kind of linear operation applied to a specific image pixel and its neighbors. Convolutional networks are simply neural networks that use convolution in place of general matrix multiplication in at least one of their layers (Aghdam & Heravi, 2017). CNNs require relatively little pre-processing compared to other image classification algorithms; the CNN learns the filters and features that in traditional algorithms were hand-crafted. This independence from prior knowledge and human effort in feature design is a major advantage, but it comes at the cost of needing a very large number of training examples (Aker & Kalkan, 2017; Chen et al., 2017; Girshick, 2015).

Decision Trees
A decision tree uses a tree-like decision-making structure, where observations of the inputs are represented in the branches and a conclusion is represented in a leaf. Trees have been used in data mining and ML. Classification is one of the important uses of decision trees, where the feature vector is traversed through the branches to reach a leaf that represents the class label. Regression trees are another type of tree that uses real numbers in the leaves instead of class labels (Kumar, 2013).

Support Vector Machine
Support Vector Machines (SVMs) are a set of related supervised learning methods used for classification and regression. Given a set of training examples, each marked as belonging to one of two categories, positive or negative, an SVM builds, after training, a model that predicts the category of new examples. An SVM training algorithm is originally a non-probabilistic, binary, linear classifier. In addition to performing linear classification, SVMs can efficiently perform non-linear classification using what is called the kernel trick, implicitly mapping their inputs into high-dimensional feature spaces (Boujelbene, Mezghani, & Ellouze, 2010; Kumar, 2013).
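As a small illustration of the kernel trick mentioned above, the following toy sketch (not part of the thesis experiments) compares a linear-kernel and an RBF-kernel SVM on data that is not linearly separable; the dataset and parameter values are assumptions for illustration only.

```python
# Toy sketch: a linear SVM cannot separate concentric circles, an RBF-kernel SVM can.
from sklearn.datasets import make_circles
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_circles(n_samples=400, factor=0.3, noise=0.05, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for kernel in ("linear", "rbf"):
    clf = SVC(kernel=kernel).fit(X_train, y_train)
    print(kernel, "accuracy:", clf.score(X_test, y_test))  # RBF should score far higher here
```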

Figure (2.1): Maximum-margin hyperplane and margin for an SVM trained on two classes. Sample points that lie exactly on the margins are called support vectors (Wikipedia, 2019).

An SVM tries to find a line or linear hyperplane that separates the two classes of input data points by the maximum possible margin. Each point that lies exactly on the margin is called a support vector. So the problem an SVM tries to solve is not just to find any line that separates the two classes, but the line that maximizes the separation margin; generally, larger margins help produce better classifications. SVM training is therefore a constrained optimization problem (Decoste & Schölkopf, 2002).

The SVM margins can be soft or hard. If a soft-margin SVM is chosen, simplicity is prioritized: misclassification of some data points is penalized less in order to obtain a hyperplane with wider margins. In the hard-margin SVM style, greater penalties are imposed on misclassified input data points, which prioritizes the separation between the classes regardless of the margin size (Statnikov, Hardin, & Aliferis, 2006).

SVM models can work with high-dimensional feature vectors, which is very important for CV tasks. One of the main advantages of SVM models is their ability to be trained using a smaller set of examples compared to ANN and CNN; on the other hand, CNN models do not need an image feature extraction algorithm to precede their work.

SVM models can be computationally intensive and take time to be trained and tuned to find the best hyper parameters. This need for computational resources and time is not a major issue, as most other ML methods also need a lot of time and resources to be trained. Once an SVM is trained, the model is simple and can classify new entries very fast compared to other statistical and mathematical models such as K-Nearest Neighbors (KNN) (Fan, Chang, Hsieh, Wang, & Lin, 2008).

For the best results with SVM, tuning is required for the selection of the kernel, the kernel's parameters, the soft-margin parameter C, and the class weights if the classes are unbalanced. A common choice is a linear kernel, which is very simple and fast. The best value of C and the class weights is often selected by a grid or random search over exponentially growing sequences of C. Typically, each combination of parameter choices is checked using cross validation, and the parameters with the best cross-validation accuracy are picked. The final model, which is used for testing and for classifying new data, is then trained on the whole training set using the selected parameters (Hsu, Chang, & Lin, 2003).
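A minimal sketch of this tuning procedure, assuming scikit-learn and an already-computed feature matrix X with labels y (both placeholders, not the thesis data), might look like this:

```python
# Grid search over an exponentially growing range of C with cross-validation,
# then refit on the whole training set using the best parameters.
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import LinearSVC

X = np.random.rand(200, 1764)          # placeholder feature vectors (e.g. HOG descriptors)
y = np.random.randint(0, 2, size=200)  # placeholder labels: 1 = UAV, 0 = background

param_grid = {"C": [2 ** k for k in range(-5, 6)]}   # exponentially spaced C values
search = GridSearchCV(LinearSVC(max_iter=10000), param_grid, cv=5, scoring="f1")
search.fit(X, y)                        # cross-validates every C, then refits on all of X
print(search.best_params_, search.best_score_)
```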

Overfitting
In ML and statistics, an overfitted model is an analysis that corresponds too closely or exactly to a particular set of observed data features, and thus may fail to fit additional data or predict future observations reliably (Burnham, Anderson, & Huyvaert, 2011). Overfitting occurs when the ML model fails to generalize because it has also captured residual features and noise. Overfitting can usually be identified by good performance of the model on the training dataset combined with bad generalization performance on the test dataset. Cross validation can be used to mitigate overfitting issues (Hawkins, 2004).

Underfitting
In ML, an underfitted model is a model that cannot adequately capture the underlying structure of the data; some features and parameters that would appear in a well-fitted model are missing. Underfitting would occur, for example, when fitting a linear model to non-linear data. Such a model will tend to have poor predictive performance in both training and testing. To solve an underfitting problem, more training examples can be introduced to train the model, and the model hyper parameters can be tuned using better-fitting values (Burnham et al., 2011; Hawkins, 2004).

2.2 Related Works

2.2.1 Object Detection Based on Object Features

One of the first works in this field was the Viola and Jones cascade classifier (Viola & Jones, 2001). The Viola-Jones cascade classifier was mainly introduced for face detection, but the same approach could be used for other objects. In cascading detectors, multiple stages of feature detectors based on weak learners are used. Viola-Jones was very successful at its time and could be applied on low-CPU devices, but its main drawbacks are its large number of false positives and its intolerance of target object position and rotation changes (Zhang & Zhang, 2010).

Scale Invariant Feature Transform (SIFT) is a feature detection algorithm used to detect and describe local features in images. This descriptor builds on the Difference of Gaussians (DoG) interest region detector, which produces keypoints. The orientation of the created data has to be altered so that it matches the orientation of the keypoints; after this, a histogram centered on the keypoints is created from this data (Lowe, 1999). SIFT has been used in many applications including object recognition, gesture recognition, video tracking, and many others. Its drawback is that it is mathematically complicated and computationally heavy (Laptev, Caputo, Schüldt, & Lindeberg, 2007; Se, Lowe, & Little, 2001; Sirmacek & Unsalan, 2009).

Speed Up Robust Features (SURF), first presented by Herbert Bay et al., was partially inspired by SIFT (Bay et al., 2008). SURF is faster than SIFT; it depends on integral images to get a major speed boost and is claimed to be more robust against a variety of image transformations (Panchal et al., 2013). SURF's main disadvantage is its poor performance under variations in illumination (Agaian et al., 2016).

The Histogram of Oriented Gradients (HOG) has been used widely in object detection since its successful use in pedestrian detection by Dalal & Triggs (Dalal & Triggs, 2005). HOG counts occurrences of gradient orientations in localized portions of an image, computed on a dense grid of uniformly spaced cells, and uses overlapping local contrast normalization for improved accuracy. HOG's advantages include its ease of computation, speed, and tolerance of transformations and illumination variations.

2.2.2 Object Detection Using Deep Networks

Following the successful use of deep learning in the field of image classification, the object detection problem started to be investigated using deep learning networks. Deep learning networks use a cascade of multiple layers of nonlinear processing units for feature extraction and transformation, where each layer uses the output of the previous layer as its input. The use of Convolutional Neural Networks (CNN) as a feature extraction layer is common among deep learning techniques (Girshick, 2015; Girshick et al., 2014; Ren et al., 2015). Region Proposals CNN (R-CNN) is one of the early deep learning techniques used for object recognition. R-CNN simply turns the object detection problem into an image classification problem: its first stage proposes regions of interest in the image, then features are extracted using CNN layers, and finally the regions are classified. Improved versions of R-CNN that perform better exist, such as Fast and Faster R-CNN (Chen et al., 2017; Girshick, 2015). The Single-Shot Detector (SSD), one of the fastest-performing deep learning techniques in CV, uses the same CNN layers for both feature extraction and region proposal (Liu et al., 2016). You Only Look Once (YOLO) is similar to SSD; both use the same "one-shot" detection approach, but YOLO does not include the varying-sized feature maps used by SSD (Redmon & Farhadi, 2017). Generally speaking, the main disadvantages of deep learning approaches are their need for a huge amount of training data and the very long time needed to train and fine-tune the network parameters (Aghdam & Heravi, 2017; Aker & Kalkan, 2017; Chen et al., 2017; Girshick, 2015; Krizhevsky et al., 2012); on the other hand, they need no external feature extraction algorithm, which makes them end-to-end detection models.

2.2.3 UAV Detection in Computer Vision

UAV detection and recognition using CV approaches is one of the open research topics. Rozantsev et al. built a detection and tracking CV system for UAVs and other flying objects based on Spatio-Temporal image Cubes (ST-Cubes) extracted from videos and frame sequences; their approach uses motion detection algorithms and a motion-stabilizing technique based on regression trees to identify the ST-Cubes of interest, together with a feature detector (Rozantsev et al., 2017). Rozantsev et al.'s work is aimed mainly at tracking, and they used motion detection as a main part of their tracker. They achieved about a 0.7 F1 score based on the precision-recall graph of their results.

Others used deep learning to solve the problem of UAV detection. Aker & Kalkan proposed a deep learning approach based on YOLOv2; they used a large artificial dataset to train their model on two classes, UAVs and birds, trying to make their model able to distinguish between them and avoid false positives of birds being detected as UAVs (Aker & Kalkan, 2017). Another work by Chen et al. also focused on deep learning; they used the Faster R-CNN variant of R-CNN to build a detection and tracking system for UAVs (Chen et al., 2017). While CNN solutions based on YOLO can be faster than R-CNN based models, they may not be more accurate than R-CNN. Also, all deep learning approaches suffer from very long training times and their need for a huge labeled dataset for training, which is sometimes infeasible. Chen et al. achieved a 0.54 F1 score when their approach was tested on a real-world dataset, and a 0.9 F1 score when tested on a synthesized dataset they created by augmentation. Those results are based on Chen et al.'s reported precision-recall graphs for both datasets. The noticeable difference between these two results raises a big question about the results of any research in the UAV detection problem domain using CV based on synthetic datasets. Aker & Kalkan also reported a 0.9 F1 score for their YOLO approach when it was tested on another synthesized dataset made by their own augmentation. Those results are questionable due to the use of synthesized test datasets and because of the nature of the backgrounds used, where most backgrounds are open seas and skies with no background complexity.

Jin et al. (Jin et al., 2019) proposed an approach to detect UAVs and estimate their poses based on keypoint detection, a Relational Graph Network (RGN), and the Perspective-n-Point (PnP) algorithm.

Their keypoint predictor is based on a two-stage R-CNN detector that detects the UAV's rotors; the keypoints are then ordered by the RGN, and finally the UAV pose is estimated using PnP. To evaluate their work, Jin et al. used a simulation dataset and a real-world dataset. Unfortunately, their real image dataset comprised only one UAV model, which was used in all the training and testing sets. According to their reported results, they achieved about a 0.75 F1 score in real-world image tests, while they reported a 0.93 F1 score on the synthesized dataset. This noticeable difference between the test results on real-world and synthesized datasets reinforces our criticism of all reported test results based on synthesized datasets. They also reported that their approach fails to work properly on small target objects in the images. One last note about the results achieved by Jin et al. is the high possibility of overfitting in their real-world dataset tests due to training and testing on images of only one UAV model.

Shinde et al. (Shinde, Lima, & Das, 2019) presented an architecture to detect intruder UAVs using a group of guarding UAVs. They used simulations to implement and test their method, which is based on YOLOv2. Their tests were based on a synthesized dataset with only one UAV model type as a target. Because their major interest is localizing the target UAV, they incorporate Global Positioning System (GPS) information from the guarding UAV group into their method.

2.2.4 UAV Detection Utilizing HOG

In our research we chose to use HOG as the feature descriptor because of its successful use in object detection in other fields. HOG also needs simpler computations than many other object detection methods and can be computed in parallel to be even faster. HOG can also be implemented in embedded systems because of its simplicity. HOG usually tolerates variations in illumination, which is very important for outdoor object detection because of the uncontrolled nature of outdoor lighting. In contrast to deep learning methods, HOG combined with SVM needs fewer images and less time to train, with comparable accuracy. To the best of our knowledge, HOG has not been used in UAV detection.


Chapter 3 Methodology

In this chapter, the methodology used to achieve the study goal is described in detail. The methodology is arranged in five phases, summarized in Figure (3.1):

Phase 1: Create a suitable dataset.
Phase 2: Preprocessing (resizing images, padding, aspect ratio manipulation, normalization, sharpening, color scheme).
Phase 3: HOG descriptor.
Phase 4: SVM training (train the SVM on positive and negative images from the training dataset).
Phase 5: Evaluation process (use NLMS to remove duplicate detections, use IoU to validate the detections, then use precision, recall, and F1 score as evaluation metrics).

Figure (3.1): Our UAVs detection model methodology phases.

3.1 Phase 1: Collecting images from previous related researches and the Internet to create training and testing datasets

Datasets of images for the UAV detection problem were searched for, and attempts were made to obtain the image datasets used by others for the same problem. A dataset of images that contain UAVs in different environments and a variety of backgrounds is needed. The dataset should also contain the ground truth pixels that represent the correct target object, to be able to compute the accuracy and performance of our work. The training phase of the detection algorithm needs a sufficient number of images in the dataset. Negative images that do not contain any target objects are also needed. Further details about the dataset can be found in a separate section at the beginning of the next chapter.

3.2 Phase 2: Apply preprocessing tasks to prepare our dataset for the HOG feature extraction algorithms

At this phase, tools and image software packages were used to execute preprocessing tasks such as:

3.2.1 Resizing image sizes

A square window of 64 pixels is used as the detection window, so all positive image samples were resized to this size. In the testing phase, input images were resized to a width of 1024 pixels; the aspect ratio was preserved in both cases.

3.2.2 Image padding

The HOG descriptor is based on edges, and since edges get their meaning from their surrounding context, target edges that lie on or near the borders of the training images may lack their surrounding context. Padding those borders with extra pixels may therefore help in extracting near-border edges. Three padding types were used in different experiments to see what effects padding has on the results of training and testing the model (a minimal padding-and-resizing sketch is given at the end of this subsection). The three types used are:

1. No padding.
2. 16 pixels padding from all sides.
3. 32 pixels padding from all sides.

The padding step is done before any resizing operations, and in the training phase only. Figure (3.2) shows a sample positive training image with various padding options.

Figure (3.2): Training image padding samples. (a) The sample image with no padding; (b) the same sample image with 16 pixels of padding in all directions before resizing to a 64 by 64 pixels window; (c) the same sample image with 32 pixels of padding before resizing.
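A minimal sketch of the padding and resizing steps described above, assuming OpenCV; the border mode, file name, and padding amount are illustrative assumptions, since the thesis does not state how the padding pixels are filled.

```python
# Pad a positive training image on all sides, then resize it to the 64x64 detector window.
import cv2

PAD = 16                                     # padding in pixels (0, 16, or 32 in the experiments)
img = cv2.imread("positive_sample.jpg")      # placeholder file name
padded = cv2.copyMakeBorder(img, PAD, PAD, PAD, PAD,
                            borderType=cv2.BORDER_REPLICATE)  # assumed fill mode
window = cv2.resize(padded, (64, 64))        # square detection-window size used in this work
cv2.imwrite("positive_sample_64x64.jpg", window)
```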

3.2.3 Aspect ratio manipulation

We preserved the original aspect ratio of images in all training and testing operations, to reduce any degradation in performance due to geometric transformations.

3.2.4 Normalization

Most of the normalization processes in our model are done by the HOG feature extractor, which takes groups of overlapping cells, concatenates their gradient histograms to build blocks, and then calculates a normalizing value for each block. Multiple normalization formulas were tested: L1, L1-sqrt, L2, and L2-Hys. By normalizing over multiple, overlapping blocks, the resulting descriptor is more robust to changes in illumination and shadowing. Furthermore, performing this type of normalization implies that each cell is represented in the final feature vector multiple times, but normalized by slightly different neighboring cells, which is a key point in illumination and shadowing tolerance. Equation (3.1) describes the formula of the L1-norm, where $v$ is the non-normalized vector of block histograms, $\|v\|_k$ is its $k$-norm for $k = 1, 2$, and $\epsilon$ is some small constant whose exact value is hopefully negligible.

L1-norm:
$f = \dfrac{v}{\|v\|_1 + \epsilon}$   (3.1)

The L1-sqrt norm is the square root of the L1-norm, as shown in equation (3.2).

L1-sqrt:
$f = \sqrt{\dfrac{v}{\|v\|_1 + \epsilon}}$   (3.2)

The formula of the L2-norm is shown in equation (3.3).

L2-norm:
$f = \dfrac{v}{\sqrt{\|v\|_2^2 + \epsilon^2}}$   (3.3)

L2-Hys is similar to the L2-norm, but the values are clipped by limiting the maximum value of $v$ to 0.2, and then renormalizing.

Power-law compression (square-root gamma compression) is applied to normalize the image before processing; this further improves the shadowing and illumination tolerance, which increases accuracy. We tested the effect of using square-root compression and of not using any gamma normalization at all, but did not consider other forms of power-law compression such as log gamma compression, as Dalal's work (Dalal & Triggs, 2005) stated that it over-corrects and worsens the results. To understand what the square-root gamma normalization does, let $p$ be the value of each pixel in the image; then $p'$ is the pixel value after applying equation (3.4) for square-root gamma compression, or equation (3.5) for log gamma compression.

$p' = \sqrt{p}$   (3.4)

$p' = \log p$   (3.5)
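A small numerical sketch of the block-normalization formulas (3.1) to (3.3) and the L2-Hys clipping, written with NumPy for illustration; the histogram values are made up.

```python
# Applying the four tested block-normalization schemes to one block's histogram vector.
import numpy as np

v = np.array([3.0, 0.0, 7.0, 1.0, 5.0, 2.0])   # made-up concatenated cell histograms of one block
eps = 1e-5                                      # the small constant epsilon in equations (3.1)-(3.3)

l1 = v / (np.abs(v).sum() + eps)                # L1-norm, eq. (3.1)
l1_sqrt = np.sqrt(l1)                           # L1-sqrt, eq. (3.2)
l2 = v / np.sqrt((v ** 2).sum() + eps ** 2)     # L2-norm, eq. (3.3)

l2_hys = np.clip(l2, 0, 0.2)                    # L2-Hys: clip values at 0.2 ...
l2_hys = l2_hys / np.sqrt((l2_hys ** 2).sum() + eps ** 2)  # ... then renormalize
print(l1, l1_sqrt, l2, l2_hys, sep="\n")
```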

3.2.5 Color conversion

Grayscale images were mainly used in our training and testing, so we converted the color scheme of the input images to grayscale before any feature extraction step, except in some tests where other color schemes were used, such as RGB, HLS, HSV, YUV, and YCrCb, to compare their results with those of grayscale and find what impact other color schemes have on the detection model's accuracy.

3.2.6 Sharpening

A sharpening kernel was used in some of our trials, in both training and testing, to assess whether this preprocessing step can improve the accuracy of the detection model. The idea behind this is that HOG depends on edges and UAVs are usually moving in the images, which means a kind of motion blur or out-of-focus blur can exist. Hence the effect of using sharpening methods is worth testing as a simple blur-reduction step. For this purpose, a simple 3 by 3 kernel (Gonzalez & Woods, 2002), shown in equation (3.6), was used in a convolution over the image. Figure (3.3) shows a sample image before and after sharpening; the original image in (a) has a blurring effect, and the UAV edges can be seen more clearly after sharpening.

$P' = P * \begin{bmatrix} 0 & -1 & 0 \\ -1 & 5 & -1 \\ 0 & -1 & 0 \end{bmatrix}$   (3.6)

Figure (3.3): A sample training image. (a) The sample before sharpening; (b) the sample after applying the sharpening kernel.
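A minimal sketch of this sharpening step, assuming OpenCV and the 3 by 3 kernel of equation (3.6); the file names are placeholders.

```python
# Sharpen an image by convolving it with the 3x3 kernel of equation (3.6).
import cv2
import numpy as np

kernel = np.array([[0, -1, 0],
                   [-1, 5, -1],
                   [0, -1, 0]], dtype=np.float32)

img = cv2.imread("blurred_sample.jpg")       # placeholder file name
sharpened = cv2.filter2D(img, -1, kernel)    # -1 keeps the source image depth
cv2.imwrite("sharpened_sample.jpg", sharpened)
```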

3.3 Phase 3: Building and programming the HOG feature descriptor extraction algorithm

At this phase, the HOG feature descriptor algorithm is used to extract image features. This phase can be divided into the following sub-phases:

3.3.1 Image Pyramids

Images are imported from the dataset and an image pyramid is built for each image. Image pyramids are a technique used to improve tolerance of object size variations within the field of view when detecting objects based on shape. The image pyramid is created by producing multiple scales of the same image and searching each scale for the target object. The number of scaled images is one of the model parameters that has to be tuned to balance accuracy and speed. Between 5 and 8 layers are used in our pyramid implementation, depending on the size of the first layer: eight layers are used with an initial image width of 1024 pixels, and five layers with 720-pixel-wide images. Figure (3.4) shows four layers of an image pyramid for one of the test images.

Figure (3.4): Example of an image pyramid for a sample test image.
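A minimal sketch of building such a pyramid; the downscale factor, number of layers, and file name are illustrative assumptions, since the exact scale step is not stated here.

```python
# Build an image pyramid by repeatedly shrinking the image by a fixed factor.
import cv2

def image_pyramid(image, scale=1.25, max_layers=8):
    """Yield successively smaller copies of `image` (the first yield is the original)."""
    layer = image
    for _ in range(max_layers):
        yield layer
        h, w = layer.shape[:2]
        new_w, new_h = int(w / scale), int(h / scale)
        if new_w < 64 or new_h < 64:      # stop once a layer is smaller than the detector window
            break
        layer = cv2.resize(layer, (new_w, new_h))

image = cv2.imread("test_image.jpg")       # placeholder file name
for i, layer in enumerate(image_pyramid(image)):
    print(i, layer.shape)
```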

3.3.2 Sliding Window

For each image in the image pyramid, the HOG feature extractor is applied to the image's layers by applying it to parts of each layer within a fixed-size window. The window size used is 64 pixels by 64 pixels, and the window moves around all parts of the image like a scanner, in steps of 8 pixels. This technique is called a sliding window. The sliding window helps in detecting the position of the object of interest. The shape and size of the sliding window are among the parameters to tune for better detection accuracy; a square window is used since UAVs in images usually have roughly equal length and width.

3.3.3 HOG Descriptor

Choosing the appropriate algorithm parameters depends on the available data and the goals to be achieved. HOG descriptors are made by dividing each image into a grid of cells, computing the edge directions in each cell, and voting by calculating a histogram of edge directions per cell. The directions, called orientations, are grouped together in bins, where each bin represents an equal range of angles. Groups of cells are then gathered to create blocks; blocks usually overlap, and normalization takes place in each block based on the cell values in that block. The HOG descriptor is the concatenation of all the histograms computed at the block level. In this study, the HOG implementation in the scikit-image software package (Van der Walt et al., 2014) is used.
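A minimal sketch combining the sliding window of Section 3.3.2 with the HOG extraction of Section 3.3.3, assuming scikit-image and a grayscale pyramid layer; the parameter values mirror those mentioned above, but the code is illustrative, not the thesis implementation.

```python
# Scan one (grayscale) pyramid layer with a 64x64 window in 8-pixel steps,
# computing a HOG descriptor for every window position.
from skimage.feature import hog

def sliding_window_hog(layer, win=64, step=8):
    """Yield (x, y, hog_vector) for every window position in `layer`."""
    h, w = layer.shape
    for y in range(0, h - win + 1, step):
        for x in range(0, w - win + 1, step):
            window = layer[y:y + win, x:x + win]
            features = hog(window, orientations=9, pixels_per_cell=(8, 8),
                           cells_per_block=(2, 2), block_norm="L2-Hys")
            yield x, y, features
```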

3.4 Phase 4: Training the SVM

In this study, SVM (Support Vector Machine) is used as the supervised machine learning technique, first feeding the SVM with the P positive samples and their respective HOG descriptors, then with N negative samples from a negative training set and their HOG descriptors. Typically, N must be much larger than P (Boujelbene et al., 2010; Chapelle et al., 1999; Dalal & Triggs, 2005; Hsu et al., 2003; Hussain, 2011; Statnikov et al., 2006; Zheng, Shen, Hartley, & Huang, 2010). The training set is a large portion of our dataset, about 80% of the total number of images, while the remaining images are used as a validation set. A stratified splitting of the training and validation sets is used to ensure a more realistic representation of our dataset in both training and validation.

In this research, the scikit-learn implementation of SVM (Pedregosa et al., 2011) is utilized. Linear SVM is faster than the other SVM variants and usually gives almost the same results as more complex SVM kernels such as the Radial Basis Function (RBF) (Boujelbene et al., 2010; Chapelle et al., 1999; Dalal & Triggs, 2005; Hsu et al., 2003; Hussain, 2011; Statnikov et al., 2006). The linear SVM has multiple parameters to tune for better accuracy and performance. The most important parameter is C, which defines the penalty of the error term in classification: the greater the C value, the higher the penalty and the harder the margins of the produced model, while a small C value generates less penalty and hence a softer-margin classification model. Other parameters that can be used to tune the SVM (a minimal training sketch follows this list) are:

- The loss function, such as hinge or squared hinge.
- Whether to solve the dual or the primal SVM learning problem (Abe, 2009).
- The weight of each class to classify.
- The maximum number of iterations.
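A minimal training sketch under these choices, assuming scikit-learn with HOG feature arrays already extracted; the variable names and parameter values are placeholders, not the thesis' exact configuration.

```python
# Train a linear SVM on HOG descriptors of positive (UAV) and negative windows,
# holding out a stratified validation split as described above.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

pos_features = np.random.rand(500, 1764)    # placeholder HOG vectors of UAV windows
neg_features = np.random.rand(5000, 1764)   # placeholder HOG vectors of background windows

X = np.vstack([pos_features, neg_features])
y = np.concatenate([np.ones(len(pos_features)), np.zeros(len(neg_features))])

# ~80% training / 20% validation, stratified so both splits keep the class ratio.
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

clf = LinearSVC(C=0.01, loss="squared_hinge", class_weight="balanced", max_iter=10000)
clf.fit(X_train, y_train)
print("validation accuracy:", clf.score(X_val, y_val))
```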

3.5.3 Detection by SVM: In this step the trained SVM model is used to classify the HOG features of each sliding window from the previous step. Since sliding windows usually overlap, this results in multiple detections of the same object appearing fully or partially in different adjacent or overlapping windows, so the windows detected as positive have to be further filtered in the next step.

3.5.4 Non-local maximum suppression: The Non-Local Maximum Suppression (NLMS) helps to reduce the number of detection boxes around the same target object by suppressing redundant detections (Dalal & Triggs, 2005).

3.5.5 Applying detection measurement metrics: To be able to use detection performance metrics such as accuracy, recall, precision, and F1 score, each detection must first be judged valid or not by evaluating the percentage of intersection between the detected area and the ground truth. This is called the Intersection over Union (IoU) of the detected object. Usually an IoU equal to or greater than 50% is considered a valid detection; the well-known Pascal VOC computer vision competition uses 50% IoU for true positive detections (Everingham et al., 2015).

3.6 Tools, equipment, resources, methods, models

Resources
- Supervisor
- Books, journals and conference papers.
- IUG Library (books, journals and recent research).
- Published work and research in this domain.
- Internet web sites for tools, packages and other related information.

Tools
- Python
- PyCharm Community Edition

- OpenCV
- NumPy
- SciPy
- Scikit-image
- Scikit-learn
- Microsoft Excel and Word

Methods and Models
- HOG algorithm.
- SVM algorithm.

Equipment
- Laptop with Intel Core CPU, 3.0 GHz, 8 GB RAM.
- OS Ubuntu LTS 64 bit.
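To tie the steps of Sections 3.3 and 3.5 together, the following is a minimal, illustrative sketch of one detection pass (not the exact thesis code): a Gaussian image pyramid, a 64×64 sliding window with an 8-pixel stride, HOG extraction per window, classification by the trained linear SVM (assumed to be available as clf from the training sketch above), greedy suppression of redundant boxes, and the IoU test used to judge a detection against the ground truth.

    import numpy as np
    from skimage.feature import hog
    from skimage.transform import pyramid_gaussian

    WIN, STEP = 64, 8   # sliding-window size and stride in pixels

    def hog_features(window):
        # HOG settings in the spirit of the reference model described in Chapter 4.
        return hog(window, orientations=11, pixels_per_cell=(8, 8),
                   cells_per_block=(2, 2), block_norm="L2", transform_sqrt=True)

    def sliding_window_detect(gray, clf, downscale=1.5):
        """Run the pyramid + sliding window + SVM steps, returning raw boxes and scores."""
        boxes, scores = [], []
        for layer in pyramid_gaussian(gray, downscale=downscale):
            if min(layer.shape[:2]) < WIN:
                break
            factor = gray.shape[0] / layer.shape[0]   # map layer coords back to the image
            for y in range(0, layer.shape[0] - WIN + 1, STEP):
                for x in range(0, layer.shape[1] - WIN + 1, STEP):
                    score = clf.decision_function(
                        [hog_features(layer[y:y + WIN, x:x + WIN])])[0]
                    if score > 0:   # positive side of the SVM hyperplane
                        boxes.append([int(x * factor), int(y * factor),
                                      int((x + WIN) * factor), int((y + WIN) * factor)])
                        scores.append(score)
        return boxes, scores

    def iou(a, b):
        """Intersection over Union of two [xmin, ymin, xmax, ymax] boxes (Section 3.5.5)."""
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
        union = ((a[2] - a[0]) * (a[3] - a[1]) +
                 (b[2] - b[0]) * (b[3] - b[1]) - inter)
        return inter / union if union else 0.0

    def non_max_suppression(boxes, scores, thresh=0.3):
        """Greedy suppression of overlapping boxes, strongest first (Section 3.5.4)."""
        order = np.argsort(scores)[::-1]
        keep = []
        for i in order:
            if all(iou(boxes[i], boxes[j]) < thresh for j in keep):
                keep.append(i)
        return [boxes[i] for i in keep]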


Chapter 4 Results and Evaluation

In this chapter, the study experiments and their results are discussed, followed by the evaluation of those results. The evaluation process has two parts. First, the performance and accuracy of our detection model are assessed by tuning a variety of hyper-parameter settings, showing which parameter values lead to the best results. Second, the measure scores obtained from our model are compared to previous works in the domain of UAV detection.

4.1 Dataset Since there is no dataset used as a standard benchmark for UAV and drone detection, the proposed method was experimented with using a dataset obtained from Yueru Chen (Chen et al., 2017). The dataset is about 4.8 GB and includes more than 21,800 still images that were taken from different video clips as image sequences. The majority of the videos were shot by Chen and her colleagues at the University of Southern California; the remaining videos were ones she had selected from YouTube. Figure (4.1) shows a sample group of the dataset images.

Figure (4.1): A group of sample images from the UAVs dataset.

As most of the images were not labeled, work on the dataset started by labeling the ground truth of UAV appearances in the images. For this purpose a user-friendly tool was created to help. The label format used is [xmin ymin xmax ymax] in a square form, where the ground truth boxes are defined as the minimum square that encloses the target object. Square ground truth shapes were used to make them easier to compare to our detections, as we used a square detection window of 64 pixels in width by 64 pixels in height.
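As a small illustration of this labeling convention (a hypothetical helper, not the thesis labeling tool), the sketch below expands a rectangular [xmin, ymin, xmax, ymax] annotation into the minimum enclosing square around the same center.

    def to_enclosing_square(xmin, ymin, xmax, ymax):
        # Minimum square that encloses the rectangle, keeping the same center.
        w, h = xmax - xmin, ymax - ymin
        side = max(w, h)
        cx, cy = (xmin + xmax) / 2.0, (ymin + ymax) / 2.0
        return [int(cx - side / 2), int(cy - side / 2),
                int(cx + side / 2), int(cy + side / 2)]

    print(to_enclosing_square(10, 20, 90, 60))   # -> [10, 0, 90, 80]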

Two folders of Chen's dataset are labeled as test folders, so those two folders are used as the test dataset; the test set contains about 2,658 images. The remaining 19,200 images are too many for HOG and SVM to process in training, and an SVM can be trained using fewer examples. A negative image dataset is also still needed; Chen et al. used a CNN approach, which does not need a separate set of negative images. Therefore a smaller subset of the images in Chen's dataset is used for our training purposes: about 514 positive images and their horizontally flipped copies (left-right reflections), for a total of 1082 images, are used as the positive training set. For a negative image dataset that contains no UAVs, a small JavaScript script that fetches images from Google Image Search (Google, 2019) is used; first about 50 categories containing 21,120 images were created, and later they were refined to only 7 categories containing 2,600 images. A random approach is used to take 10 sample windows from each negative image, leading to about 26,000 negative samples in each training trial. For hard negative mining, the preliminary trained SVM models are used to search almost exhaustively through the whole negative dataset, and the SVM model is retrained using all detected false positive windows to enhance its accuracy. Both training and testing datasets include a wide range of positions, translations, rotations, poses, shapes, colors, illuminations, and backgrounds that mimic real-life UAV appearances.

Figure (4.2): Sample of our UAVs dataset with 16 pixels of padding, each of final size 64 by 64 pixels.
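The two dataset-preparation steps described above can be sketched as follows (assumed image shapes and function names, not the thesis scripts): left-right flipping of the positive crops and random 64×64 window sampling from the UAV-free negative images.

    import numpy as np

    def augment_positive(crop):
        """Return the crop and its horizontal mirror (left-right reflection)."""
        return [crop, np.fliplr(crop)]

    def sample_negative_windows(image, n=10, win=64, rng=np.random.default_rng(0)):
        """Draw n random win x win windows from a negative (UAV-free) image.
        Assumes the image is at least win pixels in each dimension."""
        h, w = image.shape[:2]
        windows = []
        for _ in range(n):
            y = rng.integers(0, h - win + 1)
            x = rng.integers(0, w - win + 1)
            windows.append(image[y:y + win, x:x + win])
        return windows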

4.2 Evaluation Metrics A variety of evaluation metrics are used, including:

Accuracy
Accuracy measures the percentage of true results out of the total number of examined samples.

Accuracy = (TP + TN) / (TP + FP + TN + FN)    (4.1)

Precision
Precision answers the question of how many positive predictions are really correct.

Precision = TP / (TP + FP)    (4.2)

Recall
Recall measures the proportion of actual positives that are correctly classified.

Recall = TP / (TP + FN)    (4.3)

F1 Score
This is the most important metric as it gives a balance between both recall and precision. F1 is the harmonic mean of precision and recall, as follows.

F1 = 2 · Precision · Recall / (Precision + Recall)    (4.4)
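The four measures above can be computed directly from the TP/FP/TN/FN counts, as a straightforward restatement of Equations (4.1) to (4.4); the example counts below are illustrative only.

    def detection_metrics(tp, fp, tn, fn):
        accuracy = (tp + tn) / (tp + fp + tn + fn)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        return accuracy, precision, recall, f1

    # Illustrative counts (not the thesis results).
    print(detection_metrics(tp=71, fp=77, tn=40, fn=29))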

4.3 Experiments More than 80 experiments were made to investigate a variety of hyper-parameters belonging to HOG, SVM, and the preprocessing process, to understand their effect on the proposed model, and to assess the resulting models in order to choose the best of them in terms of performance and assessment measures. Table (4.1) shows the hyper-parameters of our reference model; this reference was chosen arbitrarily, for study comparison purposes only.

Table (4.1): Our reference model hyper-parameters for both HOG and SVM.

HOG Hyper-Parameters:
- Color scheme: Gray
- Number of orientation bins: 11
- Number of pixels per cell: 8×8
- Number of cells per block: 2×2
- Block normalization: L2
- Gamma normalization: Square root gamma compression
- Sharpening: No
- Training images padding: 16 pixels

SVM Hyper-Parameters:
- SVM kernel: Linear
- Problem solving type: Dual
- C (penalty parameter): 1×10⁻⁴
- Negative class weight: 5
- Positive class weight: 1
- Max number of iterations: 50,000
- Negative hard mining: True

This reference model was trained in a first round to get the preliminary SVM model; that model was then applied to the negative image dataset to search it for false positive detections, and retrained on the newly hard-mined negative windows to get the final SVM model. After testing this model on the test dataset, the results of detecting UAVs in images were as illustrated in Table (4.2).

Table (4.2): Evaluation results of the reference model when used on the test dataset for UAV detection.
- Accuracy: 0.44
- Precision: 0.48
- Recall: 0.71
- F1 Score: 0.57

To understand the nature of the HOG features obtained from the training dataset, and the potential of the SVM to separate and predict positive UAV objects, several visualizations of the underlying data were attempted to assess the proposed approach. The next sections demonstrate these visualization efforts on the collected HOG features.

4.3.1 Visualization of Average HOG for the Positive Training Dataset In order to visualize the HOG features of the training dataset and to understand what they try to capture from the examples, all the HOG vectors of the positive training set were summed and averaged to get their mean value; the mean of the HOG blocks was then visualized per bin, using a heatmap color spectrum. Figure (4.3) shows the visualization results. As can be seen from the figure, multiple bins have captured the T-like shape of a flying UAV, such as bins number 4, 5, 7, and 8.
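A sketch of the averaging step behind Figure (4.3), assuming pos_feats holds the positive HOG descriptors and that scikit-image lays the flattened descriptor out as blocks × cells-per-block × orientation bins (7×7 block positions, 2×2 cells per block, 11 bins, giving the 2,156 features of the reference model); both the array name and the layout are assumptions for illustration.

    import numpy as np

    mean_hog = pos_feats.mean(axis=0)                              # (2156,)
    per_bin = mean_hog.reshape(7, 7, 2, 2, 11).mean(axis=(2, 3))   # (7, 7, 11)
    # per_bin[:, :, b] is the heat map for orientation bin b.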

Figure (4.3): Mean HOG features visualization of positive UAV images from the training dataset, per orientation bin (the 11 bins of our reference model).

The HOG feature vector of our reference model has about 2,156 features, each representing a histogram value for one bin in one of the cells of one of the blocks. Such a long feature vector is not easy for humans to visualize, so in order to plot it in 2D or 3D graphs some feature dimensionality reduction techniques are utilized; the HOG features can then be drawn in 2D and 3D to see how well the positive and negative samples' features are separated. The more separation we can see in the plotted graphs, the more confident we can be that an SVM can classify the HOG features and predict the truly positive UAVs. Dimensionality reduction of high-dimensional datasets, while retaining as much of the variance in the data as possible, can be very helpful in visualization. The next few sections demonstrate some of these feature dimensionality reduction visualizations.

4.3.2 Random feature reduction 2D visualization Although the projection is random, it can sometimes give a fast indication of how well the features are separated. Unfortunately our random features graph was not a promising one, but this is not a bad sign as it is just a random projection, and there are still more scientifically grounded methods to reduce the feature dimensionality and visualize it. Figure (4.4) shows our random HOG features visualization attempt.

4.3.3 PCA Truncated SVD feature reduction 2D visualization Principal Component Analysis (PCA) is a statistical procedure that can convert a set of features of correlated variables into a set of linearly uncorrelated variables. The new uncorrelated set is called the principal components of the original feature set. In other words, the principal components are the basic elements of the feature set that can be used to differentiate the observations. The first principal component should have the largest possible variance over the original feature set, and each subsequent principal component should have the largest possible variance while being uncorrelated with the previously selected components (Jolliffe & Cadima, 2016). Truncated Singular Value Decomposition (SVD) is a variant of PCA: in PCA the factorization is calculated on the covariance matrix of the data, while in truncated SVD it is done on the data matrix itself (Hansen, 1987). Figure (4.5) shows the Truncated SVD plot of our HOG feature vectors; the orange points that represent the positive samples can be seen clearly clustered into 4 groups, mainly in the middle of the diagram, which is a good indication that a good ML model can detect those positive samples correctly.

Figure (4.4): Random projection of the HOG features of our reference model; orange points represent the positive samples of the training dataset.

Figure (4.5): Truncated SVD projection of our reference model HOG features; the orange points represent the positive samples of the training dataset.
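A minimal sketch of how a projection like the one in Figure (4.5) can be produced with scikit-learn, assuming X is the HOG feature matrix and y the labels (1 for UAV windows, 0 for negatives); these names are assumptions carried over from the training sketch.

    import matplotlib.pyplot as plt
    from sklearn.decomposition import TruncatedSVD

    svd = TruncatedSVD(n_components=2, random_state=0)
    X_2d = svd.fit_transform(X)

    plt.scatter(X_2d[y == 0, 0], X_2d[y == 0, 1], s=4, c="steelblue", label="negative")
    plt.scatter(X_2d[y == 1, 0], X_2d[y == 1, 1], s=4, c="orange", label="UAV (positive)")
    plt.legend()
    plt.title("Truncated SVD projection of HOG features")
    plt.show()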

4.3.4 PCA feature reduction 3D visualization The next step in the investigation of our dataset's HOG features was to visualize a PCA of 3 principal components in a 3D diagram. The red points that represent our targeted positive UAV samples can be seen in Figure (4.6); they are mainly clustered in the middle of the other samples, again indicating good possibilities for the SVM to detect them.

Figure (4.6): 3D projection of 3 PCA principal components of our reference model HOG features; the red points represent the positive samples of the training dataset.

4.3.5 t-SNE feature reduction 2D visualization T-distributed Stochastic Neighbor Embedding (t-SNE) is a visualization algorithm developed by Laurens van der Maaten and Geoffrey Hinton (Maaten & Hinton, 2008). It is a nonlinear dimensionality reduction technique well-suited for embedding high-dimensional data for visualization in a low-dimensional space of two or three dimensions. Specifically, it models each high-dimensional object by a two- or three-dimensional point in such a way that similar objects are modeled by nearby points and dissimilar objects are modeled by distant points with high probability. Also

it tends not to crowd the points in the middle of the graph. The only drawback of t-SNE is that it is computationally very expensive and takes a lot of time compared to other methods such as PCA. To compensate, a compromise solution is used: the first 50 principal components are calculated using the faster PCA algorithm, then t-SNE is applied on those 50 principal component features to extract a 2-component t-SNE projection. Figure (4.7) shows that the method was successful and presented the HOG features of our target object as very well separated from the negative examples. The orange points, which represent the positive examples, can be seen clustered in two main groups, one at the bottom and another at the middle top.

Figure (4.7): 2-component t-SNE of the first 50 PCA components of our reference model HOG features projected in 2D; the orange points represent the positive samples of the training dataset.
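The two-stage compromise described above can be sketched as follows with scikit-learn (assumed input X as before): PCA down to 50 components first, then a 2-component t-SNE on the reduced features.

    from sklearn.decomposition import PCA
    from sklearn.manifold import TSNE

    X_50 = PCA(n_components=50, random_state=0).fit_transform(X)
    X_tsne = TSNE(n_components=2, random_state=0).fit_transform(X_50)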

4.3.6 t-SNE feature reduction 3D visualization Here PCA is used to compute 50 principal components, then a 3-component t-SNE is computed to project the results in a 3D diagram. As seen in Figure (4.8), the red points representing our positive samples' features are clearly clustered in two groups, one at the upper far end and the second at the middle left. The clustering of the positive points indicates very good separation possibilities using ML models.

Figure (4.8): 3-component t-SNE of the first 50 PCA components of our reference model HOG features projected in 3D; the red points represent the positive samples of the training dataset.

4.3.7 Direct t-SNE feature reduction 2D visualization Instead of applying t-SNE indirectly after PCA has reduced the feature dimensionality, here t-SNE is utilized directly to reduce the HOG feature vector to two components, which are then projected on the diagram in Figure (4.9). The orange points represent the positive examples, and it is very clear how most of them are separated as a group at the bottom middle of the diagram. This separation clearly emphasizes

that an ML model can indeed separate the two classes of positive and negative samples using this HOG feature vector descriptor.

Figure (4.9): Direct 2-component t-SNE of our reference model HOG features projected in 2D; the orange points represent the positive samples of the training dataset.

4.3.8 HOG feature vector of a sample UAV image HOG divides the image into a grid of blocks, each containing a number of bins that accumulate the histogram of gradients based on their orientations. In an attempt to visualize the HOG descriptor of a sample image from our positive training dataset, the visualization was divided into the same number of HOG blocks; each orientation bin is then represented by a unit line (vector) centered at the center of the block, drawn at an angle equal to the one represented by the orientation bin, and the color of the vector is brighter if the histogram value is high, and darker otherwise. Figure (4.10) shows the sample UAV image and its HOG

representation visualized. Notice how the edges of the UAV have been emphasized and captured by the HOG descriptor.

Figure (4.10): On the right, a sample UAV image; on the left, its HOG descriptor visualized. Note how HOG captured the edges of the UAV as a T-like shape.

After investigating the HOG features of our training data, the effects of various HOG and SVM hyper-parameters on the detection results and performance were investigated. Many experiments and tests were carried out, and they are presented in the following subsections.

4.3.9 Hard Mining It was not surprising that using hard mining enhances the final testing results, the same finding reported by many researchers who used HOG and SVM in various object detection applications (Dalal & Triggs, 2005). For our reference model, which is built with one iteration of hard mining, the difference from its original preliminary detector is about 15% in terms of F1 score.

4.3.10 Color schemes Many color schemes were tested, including HLS, HSV, YUV, YCrCb, RGB, and Gray. Table (4.3) shows the detection performance results per color scheme. Among the color schemes, HLS and HSV performed better than the others, but without a significant difference between the two; both share the same theory of representing colors, which is why the difference between them is not significant. The Gray scheme, however, performed better than all the other schemes, improving the performance by only about 4% compared with the HLS color scheme. Colored images need about three times the memory of a gray image, and their HOG feature vector is three times as long as that of a gray image; the larger the HOG vector, the

more time is needed to train the SVM and the more time is needed in detection. This is probably because there is a wide variety of UAV colors and backgrounds, making gradients of illumination a better cue for distinguishing them than any color gradients. As the performance of gray-image HOG with the SVM is the more justified and feasible choice, the rest of the experiments are performed on gray-image HOG only.

Table (4.3): The effect of color scheme selection on our UAV detection model. Assessment measures (accuracy, precision, recall, F1 score) are reported for the Gray, HLS, HSV, YUV, YCrCb, and RGB color schemes.

4.3.11 Number of orientation bins The number of orientation bins has a great effect on the detection performance. We found that the best number of bins for our model is around 10; increasing the number beyond that does not lead to better detections and also makes the HOG descriptor vector longer, which means more computation in training and detection. Using more than 17 bins gives results that are worse by 9% in terms of F1 score, and fewer than 5 bins also gives bad detection results. The sweet spot for the number of bins is between 7 and 13. Because of this, HOG with 11 orientation bins is used in the subsequent tests, as it gives good results with a moderate HOG feature vector length. Table (4.4) shows the test results of the various detectors based on the number of orientation bins. As can be seen, 9 and 11 bins yield the best results. These results are logical: with too many orientations the SVM will not be able to see the similarities between target objects and will fail to generalize, while too few

orientation bins will make objects of different classes look the same, as there is not enough information to distinguish them.

Table (4.4): The effect of the number of orientation bins on our reference detection model. Assessment measures (accuracy, precision, recall, F1 score) are reported for the tested numbers of orientation bins.

4.3.12 Normalization Gradient values in images vary due to variations in illumination and foreground-background contrast; as HOG is a descriptor of those gradients, effective contrast normalization could be essential for good performance. In this work a number of different normalization schemes were evaluated. Block normalization is a process where cells are grouped within larger spatial blocks and local contrast normalization is applied in each block separately. The overlapping blocks allow cells to be normalized multiple times, each time in a different surrounding neighborhood. The final HOG descriptor is then the vector of all features of the normalized cell histograms from all the blocks in the detection window. A general gamma correction and normalization for the whole image, not only at the block level, was also evaluated; for this purpose Square Root Gamma Compression was tested. As can be seen from the test results in Table (4.5), L2 normalization at the block level combined with square root gamma compression at the whole-image level gives the best results. Skipping the gamma correction step results in a 6% decrease in detection performance, and the use of L1 block normalization also decreased the detection performance by 6% in F1 score.

This led us to conclude the importance of normalization for better detections that tolerate illumination variance.

Table (4.5): The effect of different normalization methods on our reference detector. Assessment measures (accuracy, precision, recall, F1 score) are reported for the L1, L1-sqrt, L2, L2-Hys, and L2-without-square-root-gamma-compression schemes.

4.3.13 Block and Cell Sizes Multiple combinations of cell sizes and blocks of cells were evaluated; the number of pixels per cell and the number of cells per block are among the main HOG parameters to tune for better detections. Figure (4.11) plots the F1 score with respect to cell size in pixels and block size in cells. For UAV detection, 2×2-cell blocks of 8×8-pixel cells perform best, with an F1 score of about 0.57. In general, 6- and 8-pixel-wide cells do best irrespective of the block size, an interesting coincidence, as Dalal et al. (Dalal & Triggs, 2005) also found that cells of about 6-8 pixels worked best for their pedestrian detection. Unlike their exact results, however, we found that the best cell and block size combination for UAV detection was 2×2-cell blocks of 8×8-pixel cells, while Dalal et al. found that 3×3-cell blocks of 6×6-pixel cells worked best for their human detection purposes. This leads us to say that the cell and block sizes have to be tuned for each application, as there is no single fixed parameter setting that fits all target objects. It can also be seen clearly that 2×2 and 3×3 blocks work best, while smaller or larger sizes make the results worse. This may be due to the lack of good normalization, as was discussed in the previous section; a good normalization scheme is very important for

good detections that can tolerate illumination, lighting, and shadowing effects. Cells in a block of size 1×1 have no normalization with adjacent cells, while too-big blocks also deteriorate the results, as cells have to be normalized with cells that are too far from their local neighborhood, losing the ability to describe their local spatial domain correctly.

Figure (4.11): F1 score of various cell and block combinations. This chart shows the F1 score of our reference UAV detection model when used with various cell and block sizes; the cell size is measured in pixels while the block size is measured in cells.

4.3.14 Detector Window and Context Our reference model has a detection window that includes about 16 pixels of margin around the UAV on all four sides in its original image, prior to resizing it to the detection window size. Table (4.6) shows that this border provides an amount of context that helps detection. Decreasing it from 16 to 0 pixels decreases performance by 1% in terms of F1 score, even though the resolution of the UAV increased. The small difference in

results between no padding and 16 pixels of padding is due to the fact that the main HOG features used by the SVM to distinguish targets lie a little away from the borders, mainly in the UAV's body, arms, and rotors, while the UAV's rotating propellers create the desired buffering context around the edges of the target object's main parts. Increasing the border to 32 pixels, however, causes a further performance loss of 36% in terms of F1 score. This tells us that moderate padding, just enough to put the target object in its environmental context, can help detection, but increasing this padding too much hurts detection performance as the SVM starts to give the background more attention, which in turn introduces noise into the final detection model.

Table (4.6): Effect of the padding of positive training images on the UAV detection model. Assessment measures (accuracy, precision, recall, F1 score) are reported for no padding, 16 pixels, and 32 pixels of padding.

4.3.15 Negative image set At the beginning of the experiments the full negative dataset of 50 categories, including about 21,120 images, was used. This large number of negative images made the SVM training time very long, more than we had planned for; it took more than 48 hours in one of the trials, and the SVM did not even converge in other experiments. So we decided to reduce the size of the negative dataset, first by randomly selecting only a sample subset of each category. The next obstacle was that even after this reduction the SVM sometimes did not converge in training, so a decision was made to increase the maximum number of SVM iterations from the default 1,000 to 50,000. The first results of this work were not promising, and a random guess could make better detections than what was found. In order to figure out what was happening, a sample of test images was investigated by visualizing the results, and the main problem turned out to be the high number of false positive

detections. As a treatment, we started to exclude all negative image categories that look similar to those false positive detection boxes; after testing and experimenting with the exclusion of each category, in the end only 7 categories containing 2,600 images were used, and better results started to appear. Our conclusion was that the very large number of negative samples containing too many images of shapes, edges, and gradients closely similar to UAV shapes had poisoned the SVM and introduced noise, so no good generalization could be achieved. This highlights the important role of choosing the training datasets for both positive and negative images as a key point for a successful detection model. The second issue was the class weights in the SVM, as the negative and positive datasets are totally unbalanced. First the Scikit-learn default 'balanced' mode was used, which uses the class labels in the training dataset to automatically adjust weights inversely proportional to the class frequencies, as in equation (4.5). This 'balanced' mode enhanced the results by 11% compared to using no class weights at all. Finally the use of manual class weights on a logarithmic scale was investigated, and it was found that a class weight of 5 for negative images and 1 for positive images gives the best results for our model, increasing the final F1 score by 26% compared with the 'balanced' mode.

class weight = Number of training samples / (Number of classes × Number of samples in this class)    (4.5)

4.3.16 Sharpening As HOG depends on edges and gradients, sharpening was evaluated as a pre-detection step in testing and also as a pre-training step. Sharpening in general was found to decrease the detection performance of the reference model; the F1 score was about 4% lower. In more specific tests on a small group of images in which the target UAV has some blurring, sharpening proved helpful and increased the detection performance in terms of F1 score, but this is a special case, not a general rule. So the conclusion is that sharpening is not, as was expected, a good part of the final detection model.
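Returning to the class-weight discussion above: equation (4.5) is exactly what scikit-learn's class_weight="balanced" mode computes, while the manual alternative reported to work best here fixes the weights at 5 (negative) and 1 (positive). A minimal sketch under those assumptions:

    import numpy as np
    from sklearn.svm import LinearSVC

    def balanced_weight(y, cls):
        # Equation (4.5): n_samples / (n_classes * n_samples_in_class)
        return len(y) / (len(np.unique(y)) * np.sum(y == cls))

    clf_balanced = LinearSVC(C=1e-4, class_weight="balanced", max_iter=50_000)
    clf_manual = LinearSVC(C=1e-4, class_weight={0: 5, 1: 1}, max_iter=50_000)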

4.3.17 Classifier By default a soft (C = 0.01) linear SVM from Scikit-learn (Pedregosa et al., 2011) was trained. This SVM was then tuned by grid searching for the best-fit C value on a logarithmic scale from 10³ to 10⁻⁶; the results showed that the best C value, leading to the highest F1 score, is C = 1×10⁻⁴, the value used by the reference model. Figure (4.12) shows the learning curve of the reference model using this C value; the diagram shows that as the number of training samples increases the validation curve converges to the training curve, indicating a good training fit without over- or underfitting. The SVM solver was also tested in both the primal and dual forms. The latter requires less time and gave better results in the final tests. This was expected, as studies (Abe, 2009) show that the dual SVM solver is better especially in the case of high-dimensional feature vectors, and our HOG descriptor is indeed long.

Figure (4.12): The learning curve of our detection model. The red line is the training accuracy, which is almost 99%; the green line represents the cross-validation accuracy and shows an increase in the accuracy of the trained SVM as the number of training examples increases.
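A sketch of the C grid search described above (logarithmic grid from 10³ down to 10⁻⁶, scored by F1 over cross-validation folds); X_train and y_train are the assumed training arrays from the earlier sketches.

    import numpy as np
    from sklearn.model_selection import GridSearchCV
    from sklearn.svm import LinearSVC

    grid = GridSearchCV(LinearSVC(max_iter=50_000, dual=True),
                        param_grid={"C": np.logspace(3, -6, 10)},
                        scoring="f1", cv=5)
    grid.fit(X_train, y_train)
    print("best C:", grid.best_params_["C"])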

4.3.18 Examples of the detections In this section a sample of the detections made by the reference detection model is illustrated in Figure (4.13) to Figure (4.20). Our model detected the target UAVs: the complex backgrounds in Figure (4.13) and Figure (4.14) did not prevent the detector from correctly detecting the UAVs, and the small target size in Figure (4.18) and the large target size in Figure (4.19) do not seem to be a problem for the detector, while in Figure (4.20) the tree branches triggered a false alarm, maybe because their outer contour looks similar to the outer contour of a UAV. Partial occlusion with a complex background in Figure (4.21) prevented the detector from correctly detecting the UAV, but although the image contains a lot of trees, no false alarms were raised. In Figure (4.22) and Figure (4.23) there are no target objects, and the detector correctly classified them as negative.

Figure (4.13): UAV Detection example (a). Ground truth in green, detection in red.

Figure (4.14): UAV Detection example (b). Ground truth in green, detection in red.

Figure (4.15): UAV Detection example (c). Ground truth in green, detection in red.

Figure (4.16): UAV Detection example (d). Ground truth in green, detection in red.

Figure (4.17): UAV Detection example (e). Ground truth in green, detection in red.

Figure (4.18): UAV Detection example (f). A very small target has been detected. Ground truth in green, detection in red.

Figure (4.19): UAV Detection example (g). A large target detection. Ground truth in green, detection in red.

Figure (4.20): UAV Detection example (h). Ground truth in green, detection in red.

Figure (4.21): UAV Detection example (i). Missed detection. Ground truth in green, detection in red.

Figure (4.22): UAV Detection example (j). No targets and no detections.

Figure (4.23): UAV Detection example (k). No targets and no detections.

The NLMS has an important role in the detection model. As can be seen in Figure (4.24) to Figure (4.28), the SVM detects many frames around each potential target, and the NLMS suppresses most of them in favor of a few, to give the final refined detections.

Figure (4.24): UAV redundant detections eliminated by NLMS; the red box is the final detection box, the blue boxes are the original detections by the SVM.

Figure (4.25): UAV redundant detections eliminated by NLMS; the red box is the final detection box, the blue boxes are the original detections by the SVM.

Figure (4.26): UAV redundant detections eliminated by NLMS; the red box is the final detection box, the blue boxes are the original detections by the SVM.

Figure (4.27): UAV redundant detections eliminated by NLMS; the red boxes are the final detection boxes, the blue boxes are the original detections by the SVM.

Figure (4.28): UAV redundant detections eliminated by NLMS; the red box is the final detection box, the blue boxes are the original detections by the SVM.

4.4 Final Results Overview We compare the overall performance of our final HOG-SVM detector model with that of some other existing methods that worked on the same problem. Detectors based on deep learning and CNNs have been used to detect UAVs; Chen et al. proposed a monitoring system for drones that has a detection module and a tracking module (Chen et al., 2017), and we used images from their dataset for both training and testing. To compare fairly, we compare our model to Chen's detection model alone, not to their tracking system or to the combined detection and tracking system, as we only build an object detection system. The goal of drone detection is to detect and localize the drone in static images. Chen et al.'s approach is built on Faster-RCNN, which is one of the object detection methods for near real-time applications based on CNNs and deep learning. Faster-RCNN uses a deep CNN to classify object proposals; to achieve real-time detection, it replaces external object proposals with Region Proposal Networks (RPNs) that share CNN feature maps with the detection network. The RPN is constructed on top of the convolutional layers. Chen et al. reported the Precision-Recall curve of their standalone detector when tested on their real-world dataset, which
