Generative Adversarial Networks (GANs):
Explanation: GANs consist of two neural networks, a generator and a discriminator, trained simultaneously in a competitive process. The generator creates synthetic data samples, while the discriminator tries to distinguish real samples from fake ones. As the two compete, the generator produces increasingly realistic data and the discriminator becomes a sharper judge. This adversarial setup leads to the generation of high-quality synthetic data.
Application: GANs are widely used for generating realistic images, text, audio, and more. They have applications in art generation, data augmentation, and even drug discovery.
Example: Generating images of handwritten digits using a GAN.
Python Code (using TensorFlow and Keras):
Code:
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
# Load and preprocess the MNIST dataset
(x_train, _), (_, _) = keras.datasets.mnist.load_data()
x_train = x_train.reshape(x_train.shape[0], 28, 28, 1).astype("float32") / 255.0
# Define the generator model
generator = keras.Sequential([
    layers.Dense(7 * 7 * 64, input_shape=(100,), activation="relu"),
    layers.Reshape((7, 7, 64)),
    layers.Conv2DTranspose(64, kernel_size=3, strides=2, padding="same", activation="relu"),
    layers.Conv2DTranspose(1, kernel_size=3, strides=2, padding="same", activation="sigmoid")
])
# Define the discriminator model
discriminator = keras.Sequential([
    layers.Conv2D(64, kernel_size=3, strides=2, padding="same", input_shape=[28, 28, 1]),
    layers.LeakyReLU(alpha=0.2),
    layers.Dropout(0.5),
    layers.Conv2D(128, kernel_size=3, strides=2, padding="same"),
    layers.LeakyReLU(alpha=0.2),
    layers.Dropout(0.5),
    layers.Flatten(),
    layers.Dense(1, activation="sigmoid")
])
# Combine generator and discriminator into a GAN
gan = keras.Sequential([generator, discriminator])
# Compile discriminator (for training separately)
discriminator.compile(optimizer=keras.optimizers.Adam(learning_rate=0.0002, beta_1=0.5), loss="binary_crossentropy")
# Compile GAN
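# Freeze the discriminator inside the combined model so that only the
# generator is updated when the GAN is trained; the discriminator compiled
# above still learns when trained directly with train_on_batch.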
discriminator.trainable = False
gan.compile(optimizer=keras.optimizers.Adam(learning_rate=0.0002, beta_1=0.5), loss="binary_crossentropy")
# Train the GAN
batch_size = 128
num_epochs = 20  # number of training epochs (adjust as needed)
dataset = tf.data.Dataset.from_tensor_slices(x_train).shuffle(1000)
dataset = dataset.batch(batch_size, drop_remainder=True)
for epoch in range(num_epochs):
    for batch in dataset:
        # Train the discriminator on real and generated images
        noise = np.random.normal(0, 1, size=[batch_size, 100])
        generated_images = generator.predict(noise, verbose=0)
        real_images = batch
        y_real = np.ones((batch_size, 1))
        y_fake = np.zeros((batch_size, 1))
        discriminator_loss_real = discriminator.train_on_batch(real_images, y_real)
        discriminator_loss_fake = discriminator.train_on_batch(generated_images, y_fake)
        discriminator_loss = 0.5 * np.add(discriminator_loss_real, discriminator_loss_fake)
        # Train the generator through the combined model (discriminator frozen)
        noise = np.random.normal(0, 1, size=[batch_size, 100])
        y_gen = np.ones((batch_size, 1))
        gan_loss = gan.train_on_batch(noise, y_gen)
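After training, new digits can be generated by passing random noise vectors through the generator alone. The short sketch below is an illustrative addition, not part of the original listing; it plots a grid of generated samples with matplotlib.
Code:
# Sample 16 noise vectors and decode them into images with the trained generator
noise = np.random.normal(0, 1, size=[16, 100])
samples = generator.predict(noise, verbose=0)
fig, axes = plt.subplots(4, 4, figsize=(4, 4))
for img, ax in zip(samples, axes.flatten()):
    ax.imshow(img.squeeze(), cmap="gray")
    ax.axis("off")
plt.show()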
Variational Autoencoders (VAEs):
Explanation: VAEs consist of an encoder and a decoder network. The encoder maps input data to a latent space, while the decoder reconstructs the data from the latent space. Unlike GANs, VAEs optimize a probabilistic model and learn the underlying distribution of the data in the latent space. This allows new data samples to be generated by sampling from the learned distribution.
Application: VAEs are used for generating images, text, and other data types. They are particularly useful for applications where explicit control over the generated data's attributes is required, such as in image editing or style transfer.
Example: Generating new images of digits using VAEs.
Python Code (using TensorFlow and Keras):
Code:
import numpy as np
import matplotlib.pyplot as plt
from tensorflow import keras
from tensorflow.keras import layers
# Load and preprocess the MNIST dataset
(x_train, _), (_, _) = keras.datasets.mnist.load_data()
x_train = x_train.reshape(x_train.shape[0], 28, 28, 1).astype("float32") / 255.0
# Define the encoder model
latent_dim = 2
encoder_inputs = keras.Input(shape=(28, 28, 1))
x = layers.Conv2D(32, 3, activation="relu", strides=2, padding="same")(encoder_inputs)
x = layers.Conv2D(64, 3, activation="relu", strides=2, padding="same")(x)
x = layers.Flatten()(x)
z_mean = layers.Dense(latent_dim)(x)
z_log_var = layers.Dense(latent_dim)(x)
# Define the sampling function (reparameterization trick: z = mean + sigma * epsilon)
def sampling(args):
    z_mean, z_log_var = args
    epsilon = keras.backend.random_normal(shape=(keras.backend.shape(z_mean)[0], latent_dim), mean=0.0, stddev=1.0)
    return z_mean + keras.backend.exp(0.5 * z_log_var) * epsilon
z = layers.Lambda(sampling)([z_mean, z_log_var])
encoder = keras.Model(encoder_inputs, [z_mean, z_log_var, z])
# Define the decoder model
latent_inputs = keras.Input(shape=(latent_dim,))
x = layers.Dense(7 * 7 * 64, activation="relu")(latent_inputs)
x = layers.Reshape((7, 7, 64))(x)
x = layers.Conv2DTranspose(64, 3, activation="relu", strides=2, padding="same")(x)
x = layers.Conv2DTranspose(32, 3, activation="relu", strides=2, padding="same")(x)
decoder_outputs = layers.Conv2DTranspose(1, 3, activation="sigmoid", padding="same")(x)
decoder = keras.Model(latent_inputs, decoder_outputs)
# Define VAE model
vae_outputs = decoder(encoder(encoder_inputs)[2])
vae = keras.Model(encoder_inputs, vae_outputs)
# Define the VAE loss: reconstruction (binary cross-entropy summed over pixels)
# plus the KL divergence that pulls the latent codes toward a unit Gaussian.
# Attaching it with add_loss lets the loss use z_mean and z_log_var directly.
reconstruction_loss = keras.losses.binary_crossentropy(keras.backend.flatten(encoder_inputs), keras.backend.flatten(vae_outputs))
reconstruction_loss *= 28 * 28
kl_loss = 1 + z_log_var - keras.backend.square(z_mean) - keras.backend.exp(z_log_var)
kl_loss = keras.backend.sum(kl_loss, axis=-1)
kl_loss *= -0.5
vae.add_loss(keras.backend.mean(reconstruction_loss + kl_loss))
# Compile and train VAE (the loss is already attached to the model)
vae.compile(optimizer=keras.optimizers.Adam())
vae.fit(x_train, epochs=10, batch_size=128)
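Once the VAE is trained, new digits can be generated by sampling points from the latent space and passing them through the decoder. The sketch below is an illustrative addition that decodes a few random latent vectors and displays them with matplotlib.
Code:
# Sample latent vectors from a standard normal and decode them into images
z_samples = np.random.normal(0, 1, size=(16, latent_dim))
decoded = decoder.predict(z_samples, verbose=0)
fig, axes = plt.subplots(4, 4, figsize=(4, 4))
for img, ax in zip(decoded, axes.flatten()):
    ax.imshow(img.squeeze(), cmap="gray")
    ax.axis("off")
plt.show()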
Autoregressive Models:
Explanation: Autoregressive models, such as PixelCNN and WaveNet, model the conditional probability distribution of each data point given previous data points. They generate data sequentially, one element at a time, by sampling from the conditional distributions. Autoregressive models are capable of capturing complex dependencies within the data but can be computationally expensive to train and sample from.
Application: Autoregressive models are commonly used for generating high-resolution images, text, and audio.
Example: Generating text using a simple autoregressive language model.
Python Code:
Code:
import numpy as np
# Training data
text = "hello world"
chars = list(set(text))
char_to_idx = {ch: i for i, ch in enumerate(chars)}
idx_to_char = {i: ch for i, ch in enumerate(chars)}
data = [char_to_idx[ch] for ch in text]
# Define the autoregressive model: map each length-3 context to the
# characters observed immediately after it
order = 3
model = {}
for i in range(len(data) - order):
    context = tuple(data[i:i + order])
    next_char = data[i + order]
    if context in model:
        model[context].append(next_char)
    else:
        model[context] = [next_char]
# Generate new text by repeatedly sampling the next character
seed = data[:order]
generated_text = seed.copy()
for _ in range(10):
    context = tuple(generated_text[-order:])
    if context not in model:  # stop if the context was never observed in training
        break
    next_char = np.random.choice(model[context])
    generated_text.append(next_char)
generated_text = [idx_to_char[idx] for idx in generated_text]
print("".join(generated_text))
Transformers:
Explanation: Transformers are a type of deep learning model originally developed for natural language processing tasks. They use self-attention mechanisms to capture long-range dependencies within the input data. Recently, transformer-based architectures like GPT (Generative Pre-trained Transformer) have been adapted for generative tasks by conditioning the generation process on a given prompt or context.
Application: Transformer-based models are used for generating text, code, and other sequence data. They have shown remarkable performance in natural language generation tasks such as text completion and story generation.
Example: Generating text using a pre-trained GPT-2 model.
Python Code (using Hugging Face's transformers library):
Code:
from transformers import GPT2LMHeadModel, GPT2Tokenizer
# Load pre-trained model and tokenizer
model_name = "gpt2-medium"
tokenizer = GPT2Tokenizer.from_pretrained(model_name)
model = GPT2LMHeadModel.from_pretrained(model_name)
# Generate text (do_sample=True enables sampling so that temperature takes effect)
prompt = "Once upon a time"
input_ids = tokenizer.encode(prompt, return_tensors="pt")
output = model.generate(input_ids, max_length=100, num_return_sequences=1, do_sample=True, temperature=0.7, pad_token_id=tokenizer.eos_token_id)
generated_text = tokenizer.decode(output[0], skip_special_tokens=True)
print(generated_text)
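The self-attention mechanism mentioned in the explanation can be illustrated in a few lines of NumPy. The sketch below is an illustrative addition, not part of the transformers library; it computes scaled dot-product attention for a single head.
Code:
import numpy as np
def scaled_dot_product_attention(Q, K, V):
    # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # pairwise similarity of queries and keys
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the keys
    return weights @ V  # weighted sum of the value vectors
# Toy example: a sequence of 4 tokens with 8-dimensional embeddings attending to itself
x = np.random.randn(4, 8)
print(scaled_dot_product_attention(x, x, x).shape)  # (4, 8)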
Markov Chain Monte Carlo (MCMC) Methods:
Explanation: MCMC methods sample from a probability distribution by constructing a Markov chain whose stationary distribution is the target distribution. In the context of generative AI, MCMC methods such as Gibbs sampling and the Metropolis-Hastings algorithm are used to sample from complex probability distributions, enabling the generation of new data instances.
Application: MCMC methods are used for generating samples from probabilistic models, particularly in Bayesian inference and statistical modeling.
Example: Generating text with a simple character-level Markov chain (a basic Markov model rather than a full MCMC sampler).
Python Code:
Code:
import random
# Training data
text = "hello world"
n = len(text)
order = 2
model = {}
# Build the Markov chain model: map each 2-character context to the
# characters observed immediately after it
for i in range(n - order):
    context = text[i:i + order]
    next_char = text[i + order]
    if context in model:
        model[context].append(next_char)
    else:
        model[context] = [next_char]
# Generate new text by walking the chain
seed = text[:order]
generated_text = seed
current_context = seed
for _ in range(10):
    if current_context not in model:  # stop if the context was never observed in training
        break
    next_char = random.choice(model[current_context])
    generated_text += next_char
    current_context = generated_text[-order:]
print(generated_text)
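The explanation above also mentions the Metropolis-Hastings algorithm, which the Markov chain example does not demonstrate. The following is a minimal sketch, an illustrative addition using a simple 1-D target density chosen for this example; it draws samples from a standard normal distribution with a symmetric random-walk proposal.
Code:
import numpy as np
def target_density(x):
    # Unnormalized standard normal density (the distribution we want to sample from)
    return np.exp(-0.5 * x ** 2)
def metropolis_hastings(num_samples=5000, proposal_std=1.0):
    samples = []
    x = 0.0  # initial state of the Markov chain
    for _ in range(num_samples):
        # Propose a new state with a symmetric random-walk step
        x_new = x + np.random.normal(0, proposal_std)
        # Accept with probability min(1, p(x_new) / p(x)); otherwise keep the current state
        if np.random.rand() < min(1.0, target_density(x_new) / target_density(x)):
            x = x_new
        samples.append(x)
    return np.array(samples)
samples = metropolis_hastings()
print(samples.mean(), samples.std())  # should be roughly 0 and 1 for a standard normal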