SECTION A QUESTION 1 Consider the following points on a Cartesian plane:
SECTION B QUESTION 4
The points A(-4; 0), B(3; 7) and C(4; -1) form a triangle on the Cartesian plane, as shown in the figure below. [Figure: triangle ABC on the Cartesian plane]
4.1 Prove that triangle ABC is an isosceles triangle. (5)
4.2 Find the co-ordinates of D (D lies on AB). (4)
4.3 Determine if DC
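The isosceles claim in 4.1 can be verified numerically with the distance formula; a minimal sketch (a written proof would show these distance calculations in full):

```python
from math import hypot

# Vertices of the triangle from question 4.
A, B, C = (-4, 0), (3, 7), (4, -1)

def dist(p, q):
    """Euclidean distance between two points."""
    return hypot(p[0] - q[0], p[1] - q[1])

AB = dist(A, B)  # sqrt(7^2 + 7^2) = sqrt(98)
BC = dist(B, C)  # sqrt(1^2 + 8^2) = sqrt(65)
CA = dist(C, A)  # sqrt(8^2 + 1^2) = sqrt(65)
print(AB, BC, CA)  # BC = CA, so triangle ABC is isosceles
```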
QUESTION 5
A pilot is landing a plane. He must make sure that the plane is constantly kept equidistant (at equal distances) from the two outer landing lights, … and … . The line between the two landing lights is perpendicular to the runway. The co-ordinates of the landing lights are … and … . Find the equation of his flight path in the form … (Hint: first find the gradient of the flight path.) [Figure: runway with the two landing lights]
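The light co-ordinates did not survive extraction, but the intended method is clear: the flight path is the perpendicular bisector of the segment joining the two lights. A sketch with hypothetical light positions L1 and L2 (placeholders, not the question's values):

```python
# Hypothetical landing-light co-ordinates (the originals are missing above).
L1, L2 = (2, 6), (8, 2)

# The flight path passes through the midpoint of the segment joining the lights.
mx, my = (L1[0] + L2[0]) / 2, (L1[1] + L2[1]) / 2

# Gradient of the segment, then the perpendicular gradient for the flight path.
m_seg = (L2[1] - L1[1]) / (L2[0] - L1[0])
m_path = -1 / m_seg

# y = m_path * x + c, with c fixed by requiring the line to pass through the midpoint.
c = my - m_path * mx
print(f"y = {m_path}x + {c}")
```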
Let us say the cost of a False Positive is 10 units and the cost of a False Negative is 2 units. In this context, answer the following.
(a) (6) How would you adjust the decision tree algorithm so that its performance on unseen cases minimizes the total expected cost instead of maximizing accuracy?
(b) (6) Recall the Support Vector Machine formulation discussed in class, specifically the case in which we minimize the cost of misclassifications using a constant parameter C. Suggest a solution for learning an SVM classifier in which the costs of the two types of errors are different. Do not write any formulas; describe your ideas in language.
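One way to think about part (a): instead of predicting the majority class at a leaf, predict the class with the lower expected cost given the leaf's class probabilities. A hedged sketch of that decision rule (the cost values are from the question; the leaf probability is hypothetical):

```python
# Costs from the question: a False Positive costs 10, a False Negative costs 2.
COST_FP, COST_FN = 10, 2

def min_cost_label(p_positive):
    """Pick the label minimizing expected cost at a leaf with P(positive) = p_positive."""
    # Predicting positive risks a false positive on the (1 - p) negatives.
    cost_if_predict_pos = (1 - p_positive) * COST_FP
    # Predicting negative risks a false negative on the p positives.
    cost_if_predict_neg = p_positive * COST_FN
    return 1 if cost_if_predict_pos < cost_if_predict_neg else 0

# A leaf that is 60% positive would be labelled positive by majority vote,
# but the asymmetric costs push the decision the other way.
print(min_cost_label(0.6))
```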
Answer the following in the context of the AdaBoost algorithm (no formulas, only language description).
(a) (4) Which points are given higher/lower weights after learning each weak classifier?
(b) (4) How is the weight for each data point used by the algorithm?
(c) (4) How do we assign weights to weak classifiers for their contribution to the global decision?
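The question asks for language-only answers; as a study aid, though, the reweighting step that parts (a)-(c) describe can be sketched in code (standard AdaBoost, which may differ in detail from the variant discussed in class):

```python
import math

def adaboost_reweight(weights, correct):
    """One AdaBoost round: upweight misclassified points, downweight correct ones.

    weights: current data-point weights; correct: per-point booleans saying
    whether the weak classifier just learned got that point right.
    """
    # Weighted error of the weak classifier on the current distribution.
    eps = sum(w for w, c in zip(weights, correct) if not c) / sum(weights)
    # alpha is the weak classifier's vote in the global decision (part c):
    # more accurate classifiers get a larger say.
    alpha = 0.5 * math.log((1 - eps) / eps)
    # Misclassified points gain weight, correct ones lose weight (part a);
    # the next weak classifier is trained on this distribution (part b).
    new = [w * math.exp(-alpha if c else alpha) for w, c in zip(weights, correct)]
    total = sum(new)
    return [w / total for w in new], alpha

# Four equally weighted points; the weak classifier misses the last one.
weights, alpha = adaboost_reweight([0.25] * 4, [True, True, True, False])
print(weights, alpha)
```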
(10) Consider the context of selecting the best attribute for decision tree construction. Explain briefly the difference between the "information gain" and "gain ratio" metrics for selecting the best attributes. Is either of these two better than the other? Explain why. Do not write any formulas; explain in language only.
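The contrast the question is after can be seen on a tiny hypothetical dataset: an ID-like attribute that splits the data one record per branch gets maximal information gain, while gain ratio divides by the split information and penalizes that fragmentation. A sketch (the data is made up for illustration):

```python
import math

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    return -sum((labels.count(c) / n) * math.log2(labels.count(c) / n)
                for c in set(labels))

def info_gain(parent, children):
    n = len(parent)
    return entropy(parent) - sum(len(ch) / n * entropy(ch) for ch in children)

def gain_ratio(parent, children):
    # Split information penalizes attributes that shatter the data into many branches.
    n = len(parent)
    split_info = -sum((len(ch) / n) * math.log2(len(ch) / n) for ch in children)
    return info_gain(parent, children) / split_info

parent = [1, 1, 1, 0, 0, 0]
id_split = [[1], [1], [1], [0], [0], [0]]   # one branch per record
binary_split = [[1, 1, 1], [0, 0, 0]]        # a genuinely useful two-way split

# Both splits have the same information gain (1 bit), but gain ratio
# prefers the binary split over the data-shattering one.
print(info_gain(parent, id_split), gain_ratio(parent, id_split))
print(info_gain(parent, binary_split), gain_ratio(parent, binary_split))
```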
Consider the following training data for a perceptron:

 X   Y   Z   Class
 0   3   5   1
 1   4   8   0
 7   1   2   1
-1   5   5   0
 2   6   7   0

Use (2, 3, 1, 4) as the initial weight vector. Execute the perceptron training algorithm as discussed in class and report the following:
1. (4) The updated weight vector after the second data point, (1, 4, 8), is processed.
2. (4) The updated weight vector after the third data point, (7, 1, 2), is processed.
3. (6) The updated weight vector after the fifth data point, (2, 6, 7), is processed.
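A single pass of the standard perceptron rule over the table above can be sketched as follows. Caveat: the class's conventions may differ; this sketch assumes the fourth weight is the bias (inputs augmented with a constant 1), a learning rate of 1, and a prediction of 1 when the weighted sum is strictly positive.

```python
# Training data from the table: ((X, Y, Z), Class).
data = [((0, 3, 5), 1), ((1, 4, 8), 0), ((7, 1, 2), 1),
        ((-1, 5, 5), 0), ((2, 6, 7), 0)]
w = [2, 3, 1, 4]  # initial weights, assumed (w_x, w_y, w_z, bias)

for (x, y, z), label in data:
    xs = (x, y, z, 1)  # augmented input so the last weight acts as a bias
    pred = 1 if sum(wi * xi for wi, xi in zip(w, xs)) > 0 else 0
    if pred != label:  # update only on mistakes: add x for missed 1s, subtract for missed 0s
        sign = 1 if label == 1 else -1
        w = [wi + sign * xi for wi, xi in zip(w, xs)]
    print((x, y, z), w)
```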
Consider the following data points and their class labels: …, …, …, …, …. We want to use distance-weighted 3-NN classification with this data. Each point at a distance d has a vote of (1/d) for predicting the class label.
(a) (8) For the query point (4, 4, 4), find and report the data points in its 3-nearest-neighbor list, their weights for the decision, and the class label that should be assigned to this query point.
(b) (6) What is gained and what is lost when we increase the value of K in a K-NN classifier?
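The question's data points were lost in extraction, but the procedure for part (a) can still be sketched. The five points below are placeholders purely to illustrate the 1/d-weighted 3-NN vote; swap in the real data to get the question's answer.

```python
from math import dist  # Euclidean distance; available in Python 3.8+

# Placeholder labelled points (the question's actual data is missing above).
points = [((1, 1, 1), 'A'), ((5, 5, 5), 'B'), ((4, 4, 2), 'A'),
          ((0, 4, 4), 'B'), ((6, 3, 4), 'A')]
query = (4, 4, 4)

# Take the three nearest points; each casts a vote of weight 1/d for its label.
nearest = sorted(points, key=lambda p: dist(p[0], query))[:3]
votes = {}
for p, label in nearest:
    votes[label] = votes.get(label, 0) + 1 / dist(p, query)

predicted = max(votes, key=votes.get)
print(nearest, votes, predicted)
```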
(10) Credit card fraud detection systems operate at very low precision values, say 5%, for the class of fraudulent transactions. Why is this still a good idea? Explain with a small example.
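A small worked example of the kind the question asks for, with hypothetical costs (the 5% precision figure is from the question; the review cost and fraud loss are illustrative assumptions):

```python
# Hypothetical scenario: the system flags 100 transactions for review,
# and at 5% precision only 5 of them are actually fraudulent.
flagged, precision = 100, 0.05
frauds_caught = flagged * precision  # 5 frauds stopped

review_cost = 2    # assumed cost of manually reviewing one flagged transaction
fraud_loss = 500   # assumed average loss if a fraud goes through unchecked

cost_with_system = flagged * review_cost          # total review cost
loss_without_system = frauds_caught * fraud_loss  # losses the reviews prevent
print(cost_with_system, loss_without_system)
```

Even with 95 false alarms, the cheap reviews (200 units) prevent far larger fraud losses (2500 units), which is why low precision can still be a good trade when the two error costs are this asymmetric.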
Consider the following three data vectors: D1: (4, 6, 7, 9), D2: (6, 9, 10, 14), and D3: (4, 6, 2, 1).
(a) (3) What are the Manhattan distances for the data-point pairs D1-D2, D1-D3, D2-D3?
(b) (5) What are the cosine similarities for the data-point pairs D1-D2, D1-D3, D2-D3?
(c) (2) Which two points are the closest as per the Manhattan distance?
(d) (2) Which two points are the closest as per the cosine similarity?
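Both metrics in this question are direct to compute: Manhattan distance sums absolute coordinate differences, while cosine similarity compares direction via the normalized dot product. A sketch over the three vectors:

```python
from math import sqrt

D1, D2, D3 = (4, 6, 7, 9), (6, 9, 10, 14), (4, 6, 2, 1)

def manhattan(u, v):
    """Sum of absolute coordinate differences."""
    return sum(abs(a - b) for a, b in zip(u, v))

def cosine(u, v):
    """Dot product divided by the product of the vector lengths."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

for name, (u, v) in {'D1-D2': (D1, D2), 'D1-D3': (D1, D3), 'D2-D3': (D2, D3)}.items():
    print(name, manhattan(u, v), round(cosine(u, v), 4))
```

Note that the two metrics can disagree: D2 is a near scalar multiple of D1, so their cosine similarity is close to 1 even though their Manhattan distance is not the unique smallest.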