How can face recognition algorithms surpass humans in matching faces over changes in illumination?
A. By using color filters that mimic human eye perception under low light
B. By enhancing sharpness through contrast adjustment in every image
C. By learning invariant features using deep neural networks trained on varied lighting conditions
D. By converting all face images into grayscale to remove lighting differences entirely
What is the difference between top-down (memoization) and bottom-up (tabulation) approaches in Dynamic Programming (DP)?
A. Top-down directly iterates over all possible states, while bottom-up uses recursion with no memory to reduce time complexity.
B. Bottom-up solves problems by storing only final answers, while top-down builds an answer list through repeated trial-and-error.
C. Top-down uses recursion and stores results as needed, while bottom-up builds solutions iteratively starting from base cases.
D. Bottom-up randomly samples subproblems in reverse, whereas top-down processes each solution without storing previous answers.
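The distinction in option C can be sketched with the classic Fibonacci example: a minimal illustration, not a definitive implementation, with the function names chosen here for clarity.

```python
from functools import lru_cache

# Top-down (memoization): recurse from the goal, caching each
# subproblem's result the first time it is computed.
@lru_cache(maxsize=None)
def fib_top_down(n):
    if n < 2:
        return n
    return fib_top_down(n - 1) + fib_top_down(n - 2)

# Bottom-up (tabulation): start from the base cases and fill a
# table iteratively until the goal is reached.
def fib_bottom_up(n):
    if n < 2:
        return n
    table = [0, 1]
    for i in range(2, n + 1):
        table.append(table[i - 1] + table[i - 2])
    return table[n]
```

Both compute the same values; top-down only visits subproblems the recursion actually needs, while bottom-up visits every subproblem up to `n` in a fixed order.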
Why is supervised learning predominant in machine learning applications today?
A. It offers high accuracy by learning directly from labeled data, making it easier to train models for real-world prediction and classification tasks.
B. It operates without the need for labeled data, reducing human involvement and improving scalability for tasks involving massive unstructured datasets.
C. It builds internal representations using reward signals from the environment, making it suitable for dynamic decision-making in real-time settings.
D. It clusters data based on hidden structures without any prior labels, offering flexibility in discovering natural groupings within unknown datasets.
What roles do pooling and activation functions play in Convolutional Neural Networks (CNNs)?
A. Pooling reduces spatial dimensions, and activation functions introduce non-linearity to help CNNs learn complex patterns in data.
B. Pooling increases feature map size, and activation functions keep all values positive for stable weight initialization in CNNs.
C. Pooling layers remove all redundant data, and activation functions ensure the network remains fully linear during learning phases.
D. Pooling adds layers of noise to features, while activation functions randomly select which neurons are active in each layer.
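The two roles described in option A can be shown on a small feature map in plain Python: a minimal sketch assuming ReLU activation and non-overlapping 2x2 max pooling, which are common but not the only choices.

```python
def relu(x):
    # Activation: element-wise non-linearity (negative values clipped to 0).
    return [[max(0.0, v) for v in row] for row in x]

def max_pool_2x2(x):
    # Pooling: halves both spatial dimensions by keeping the
    # maximum value in each non-overlapping 2x2 window.
    h, w = len(x), len(x[0])
    return [[max(x[i][j], x[i][j + 1], x[i + 1][j], x[i + 1][j + 1])
             for j in range(0, w, 2)]
            for i in range(0, h, 2)]

feature_map = [[-1.0, 2.0, 0.0, -3.0],
               [4.0, -5.0, 1.0, 2.0],
               [0.0, 1.0, -2.0, 3.0],
               [-4.0, 2.0, 5.0, -1.0]]

pooled = max_pool_2x2(relu(feature_map))  # 4x4 input -> 2x2 output
```

Note how pooling shrinks the 4x4 map to 2x2 while the ReLU step makes the mapping non-linear, which is what lets stacked layers learn complex patterns.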
The horizontal distance between two adjacent crests is called ______, but the vertical distance between a crest and a trough is called ______.
What is Temporal Difference learning?
A. Learning by averaging full episode returns without updating during the episode.
B. Learning by updating estimates using current reward plus estimated future value at each step.
C. Learning by optimizing a policy using labeled supervised data from external sources.
D. Learning through evolving populations of agents using random mutations and selection.
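The per-step update described in option B is the TD(0) rule. A minimal sketch, with the learning rate, discount factor, and state names here as illustrative assumptions:

```python
def td0_update(V, state, reward, next_state, alpha=0.1, gamma=0.9):
    # TD(0): move V(s) toward the bootstrapped target
    # r + gamma * V(s'), instead of waiting for the full episode
    # return as Monte Carlo methods do.
    target = reward + gamma * V[next_state]
    V[state] += alpha * (target - V[state])
    return V

# One step of experience: from state 'a', receive reward 1.0,
# land in state 'b' whose current value estimate is 1.0.
V = {'a': 0.0, 'b': 1.0}
td0_update(V, 'a', reward=1.0, next_state='b')
```

After this single step, `V['a']` moves a fraction `alpha` of the way toward the target `1.0 + 0.9 * 1.0 = 1.9`, i.e. to `0.19`; updating during the episode like this is exactly what distinguishes TD learning from option A.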
How do L1 and L2 regularization differ?
A. L1 shrinks weights uniformly while L2 ignores small weights during optimization
B. L1 promotes sparsity by setting weights to zero, L2 reduces weights without eliminating them
C. L1 and L2 both eliminate all neurons with small values to improve training performance
D. L1 and L2 are identical in behavior but applied to different layers during backpropagation
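The behavior in option B can be seen in the per-weight update each penalty induces: a sketch using the L1 proximal (soft-thresholding) step and the L2 (ridge) shrinkage step, with `lam` standing in for an assumed regularization strength.

```python
def l1_step(w, lam):
    # L1 / lasso: soft-thresholding. Any weight with |w| <= lam
    # is set exactly to zero, which is what produces sparsity.
    if w > lam:
        return w - lam
    if w < -lam:
        return w + lam
    return 0.0

def l2_step(w, lam):
    # L2 / ridge: multiplicative shrinkage. Weights are scaled
    # toward zero but never reach it exactly.
    return w / (1.0 + lam)
```

For a small weight like `w = 0.05` with `lam = 0.1`, the L1 step zeroes it out while the L2 step merely shrinks it, matching option B's distinction.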
What are Heuristic Search Algorithms and their key characteristics?
A. Algorithms that explore every possible path in the state space until the optimal solution is found through exhaustive brute-force search techniques.
B. Algorithms that use fixed rules and logic tables to derive conclusions from given data without considering alternative or estimated paths.
C. Algorithms that randomly explore the solution space without guidance, relying purely on probability to reach near-optimal solutions over time.
D. Algorithms that use domain-specific knowledge or estimates to guide search toward goal states more efficiently than uninformed methods.
What is Transfer Learning?
A. Training a model from scratch on a new task using only the new dataset available
B. Using knowledge from a pretrained model on one task to improve learning on a related task
C. Combining predictions from multiple models trained on different datasets into one output
D. Splitting a dataset into parts and training different models on each part independently