Website Source: blog / elixr_blog
## Summary
Pending synthesis from local website source.
Original source title: Elixr: What if PyTorch Used Complex Numbers?
## Extracted Preview
Note: I had just finished Karpathy's micrograd video and understood, for the first time, exactly how backprop works. Then I started wondering: the math works over reals because derivatives are real. What happens if the values are complex? That question took a weekend to start answering, and I'm still not sure I have it right.
Elixr: What if PyTorch Used Complex Numbers?
- Project Home: [github.com/yash-srivastava19/Elixir](https://github.com/yash-srivastava19/Elixir)
- Language: Python
---
### The starting point: micrograd
If you haven't watched Karpathy's [micrograd video](https://www.youtube.com/watch?v=VMj-3S1tku0), go watch it. It's one of the best explanations of how neural network training works at the mechanical level. The core idea: every mathematical operation in a forward pass can be represented as a node in a computation graph, and you can automatically compute gradients by traversing that graph in reverse and applying the chain rule.
Karpathy's micrograd does this for scalar real values. The `Value` class wraps a number, tracks which operations produced it, and stores a `_backward` closure that knows how to compute the local gradient contribution:
```python
class Value:
    def __init__(self, data):
        self.data = data  # the wrapped scalar value
```
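The extracted preview ends mid-definition, so as a point of reference, here is a minimal sketch of what a micrograd-style `Value` looks like once the `_backward` closure and the reverse graph traversal are wired up. This is an illustration assuming the design Karpathy walks through in the video, not Elixir's actual implementation, and the operation set is trimmed to multiplication to keep it short:

```python
# A minimal micrograd-style Value: an illustrative sketch, not Elixir's code.
class Value:
    def __init__(self, data, _children=()):
        self.data = data                # wrapped scalar; real or complex
        self.grad = 0                   # accumulated dL/d(self)
        self._backward = lambda: None   # local chain-rule step, set by each op
        self._prev = set(_children)     # parent nodes in the computation graph

    def __mul__(self, other):
        out = Value(self.data * other.data, (self, other))

        def _backward():
            # product rule: d(out)/d(self) = other.data, and vice versa
            self.grad += other.data * out.grad
            other.grad += self.data * out.grad

        out._backward = _backward
        return out

    def backward(self):
        # build a topological order, then apply each local rule in reverse
        topo, visited = [], set()

        def build(v):
            if v not in visited:
                visited.add(v)
                for child in v._prev:
                    build(child)
                topo.append(v)

        build(self)
        self.grad = 1  # seed: d(self)/d(self) = 1
        for node in reversed(topo):
            node._backward()
```

Nothing in that traversal actually assumes the data is real. Feed it Python's built-in `complex` type and, for a holomorphic operation like multiplication, the familiar chain rule falls out unchanged, which is the thread the post pulls on:

```python
a, b = Value(2 + 3j), Value(1 - 1j)
c = a * b
c.backward()
print(a.grad, b.grad)  # (1-1j) (2+3j): dc/da = b, dc/db = a
```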
## Integration Notes
- Source section: `blog`
- Local source: `/home/yashs/Desktop/Programming/yash_blog/yash-srivastava19.github.io/blog/elixr_blog.md`
- Raw copy: `raw/website/yash-srivastava19-github-io/blog/elixr_blog.md`
## Links Created Or Updated
## Open Questions