Custom Samplers
You can build your own sampler by subclassing Sampler. A model is automatically generated from logp, and the sampler is initialized with a state built from start and the arguments of logp. With these in place, you only need to define the step method, which generates one sample and returns the updated state.
As an example, here's a snippet from the Metropolis sampler.
from sampyl import Sampler
from sampyl import np
from sampyl.state import State
class Metropolis(Sampler):
    def __init__(self, logp, start, **kwargs):
        # No gradient is needed, so set it to None, and the flag to False
        super(Metropolis, self).__init__(logp, start, None, grad_logp_flag=False, **kwargs)
    def step(self):
        """ Perform a Metropolis-Hastings step. """
        x = self.state
        y = proposal(x, self.scale)
        if accept(x, y, self.model.logp):
            self.state = y
        return self.state
def proposal(state, scale):
    """ Propose a new state by perturbing each variable with Gaussian noise. """
    proposed = State.fromkeys(state.keys())
    for var in state:
        proposed.update({var: np.random.normal(state[var], scale[var])})
    return proposed
def accept(x, y, logp):
    """ Accept or reject the proposed state y using the Metropolis criterion. """
    delp = logp(y) - logp(x)
    if np.isfinite(delp) and np.log(np.random.uniform()) < delp:
        return True
    else:
        return False
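Once defined, a custom sampler is used like any built-in one: construct it with a log-probability function and a start state, then call sample. The snippet below is a minimal usage sketch rather than code from the sampyl source; the standard-normal logp and the start and scale dictionaries are illustrative, and it assumes the base Sampler keeps scale addressable per variable, matching the scale[var] lookup in proposal above.
from sampyl import np

# Illustrative target: log-density of a standard normal, up to a constant.
def logp(x):
    return -0.5 * np.sum(x**2)

# Assumed: a per-variable scale dict, matching the scale[var] lookup above.
metro = Metropolis(logp, start={'x': 1.}, scale={'x': 1.})

# Draw 1000 samples, discarding the first 100 as burn-in. The returned
# record array has one field per argument of logp, so the samples of x
# are available as chain.x.
chain = metro.sample(1000, burn=100)
print(chain.x.mean())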
class sampyl.Sampler(logp, start, grad_logp=None, scale=None, condition=None, grad_logp_flag=True, random_seed=None)

Methods

sample(num, burn=0, thin=1, n_chains=1, progress_bar=True)

Sample from \(P(X)\).
Parameters:
- num – int. Number of samples to draw from \(P(X)\).
- burn – (optional) int. Number of samples to discard from the beginning of the chain.
- thin – (optional) float. Thin the samples by this factor.
- n_chains – (optional) int. Number of chains to return. Each chain is given its own process and the OS decides how to distribute the processes.
- progress_bar – (optional) boolean. Show the progress bar, default = True.
Returns: Record array with fields taken from the arguments of the logp function.
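Because the return value is a record array keyed by the arguments of logp, each model parameter becomes a named field of the chain. A brief, hypothetical sketch follows; the sampler instance and the logp arguments mu and sig are assumed.
# Assumed: sampler was built from a logp(mu, sig). Draw 2000 samples,
# discard the first 500 as burn-in, and keep every second sample.
chain = sampler.sample(2000, burn=500, thin=2)
print(chain.mu.mean(), chain.sig.mean())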