KL Divergence Between Two Normal Distributions

Easy
Machine Learning

Task: Implement KL Divergence Between Two Normal Distributions

Your task is to compute the Kullback-Leibler (KL) divergence between two normal distributions. KL divergence measures how one probability distribution differs from a second, reference probability distribution.

Write a function kl_divergence_normal(mu_p, sigma_p, mu_q, sigma_q) that calculates the KL divergence D_KL(P || Q), where P is the normal distribution with mean mu_p and standard deviation sigma_p, and Q is the normal distribution with mean mu_q and standard deviation sigma_q.

The function should return the KL divergence as a floating point number.
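For univariate normal distributions, the KL divergence has a well-known closed form. Stated here for reference, in the forward direction \( D_{KL}(P \| Q) \):

\[ D_{KL}(P \| Q) = \log\frac{\sigma_Q}{\sigma_P} + \frac{\sigma_P^2 + (\mu_P - \mu_Q)^2}{2\sigma_Q^2} - \frac{1}{2} \]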

Examples

Example 1:
Input: mu_p = 0.0, sigma_p = 1.0, mu_q = 1.0, sigma_q = 1.0
print(kl_divergence_normal(mu_p, sigma_p, mu_q, sigma_q))
Output: 0.5
Explanation: The KL divergence between the normal distributions \( P \) and \( Q \) with parameters \( \mu_P = 0.0 \), \( \sigma_P = 1.0 \) and \( \mu_Q = 1.0 \), \( \sigma_Q = 1.0 \) is 0.5.
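Substituting these parameters into the closed form above: \( \log\frac{1.0}{1.0} + \frac{1.0^2 + (0.0 - 1.0)^2}{2 \cdot 1.0^2} - \frac{1}{2} = 0 + 1 - 0.5 = 0.5 \).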

Starter Code

import numpy as np

def kl_divergence_normal(mu_p, sigma_p, mu_q, sigma_q):
	return 0.0
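
One possible solution, shown as a minimal sketch that evaluates the closed-form expression above directly (argument order and the forward direction D_KL(P || Q) are assumed):

import numpy as np

def kl_divergence_normal(mu_p, sigma_p, mu_q, sigma_q):
    # Closed-form KL(P || Q) for univariate normals
    term_log = np.log(sigma_q / sigma_p)
    term_quad = (sigma_p**2 + (mu_p - mu_q)**2) / (2 * sigma_q**2)
    return float(term_log + term_quad - 0.5)

# Example 1: prints 0.5
print(kl_divergence_normal(0.0, 1.0, 1.0, 1.0))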