Show HN: I built a hardware processor that runs Python
(www.runpyxl.com)
zik 3 hours ago This is a very cool project but I feel like the claim is overstated: "PyXL is a custom hardware processor that executes Python directly — no interpreter, no JIT, and no tricks. It takes regular Python code and runs it in silicon."

Reading further down the page it says you have to compile the python code using CPython, then generate binary code for its custom ISA. That's neat, but it doesn't "execute python directly" - it runs compiled binaries just like any other CPU. You'd use the same process to compile for x86, for example. It certainly doesn't "take regular python code and run it in silicon" as claimed.
A more realistic claim would be "A processor with a custom architecture designed to support python".
goranmoomin 1 hour ago Not related to the project in any way, but if the hardware is running on CPython bytecode, I'd say that's as far as it can get for executing Python directly – AFAIK running Python code with the `python3` executable also compiles the Python code into bytecode `*.pyc` files before it runs it. I don't think anyone claims that CPython is not running Python code directly…

hamandcheese 38 minutes ago I agree with you: if it ran pyc code directly I would be okay saying it "runs Python". However, it doesn't seem like it does; the pyc still has to be further processed into machine code. So I also agree with the parent comment that this seems a bit misleading.

I could be convinced that the native code is sufficiently close to pyc that I don't feel misled. Would it be possible to write a boot loader which converts pyc to machine code at boot? If not, why not?
rytill 19 minutes ago The phrasing “<statement> — no X, Y, Z, just <final simplified claim>” is cropping up a lot lately. 4o also ends many of its messages that way. It has to be related.
Y_Y 15 hours ago Are there any limitations on what code can run? (discounting e.g. memory limitations and OS interaction)

I'd love to read about the design process. I think the idea of taking bytecode aimed at the runtime of dynamic languages like Python or Ruby or even Lisp or Java and making custom processors for that is awesome and (recently) under-explored.

I'd be very interested to know why you chose to do this, why it was a good idea, and how you went about the implementation (in broad strokes if necessary).
hwpythonner 14 hours ago Thanks — really appreciate the interest!

There are definitely some limitations beyond just memory or OS interaction. Right now, PyXL supports a subset of real Python. Many features from CPython are not implemented yet — this early version is mainly to show that it's possible to run Python efficiently in hardware. I'd prefer to move forward based on clear use cases, rather than trying to reimplement everything blindly.
Also, some features (like heavy runtime reflection, dynamic loading, etc.) would probably never be supported, at least not in the traditional way, because the focus is on embedded and real-time applications.
As for the design process — I’d love to share more! I'm a bit overwhelmed at the moment preparing for PyCon, but I plan to post a more detailed blog post about the design and philosophy on my website after the conference.
mikepurvis 12 hours ago In terms of a feature-set to target, would it make sense to be going after RPython instead of "real" Python? Doing that would let you leverage all the work that PyPy has done on separating what are the essential primitives required to make a Python vs what are the sugar and abstractions that make it familiar:

ammar2 10 hours ago > I'd prefer to move forward based on clear use cases

Taking the concrete example of the `struct` module as a use-case, I'm curious if you have a plan for it and similar modules. The tricky part of course is that it is implemented in C.
Would you have to rewrite those stdlib modules in pure python?
mikepurvis 10 hours ago As in my sibling comment, pypy has already done all this work.

CPython's struct module is just a shim importing the C implementations: https://github.com/python/cpython/blob/main/Lib/struct.py
Pypy's is a Python(-ish) implementation, leveraging primitives from its own rlib and pypy.interpreter spaces: https://github.com/pypy/pypy/blob/main/pypy/module/struct/in...
The Python stdlib has enormous surface area, and of course it's also a moving target.
ammar2 8 hours ago Aah, neat! Yeah, piggy-backing off pypy's work here would probably make the most sense.It'll also be interesting to see how OP deals with things like dictionaries and lists.
bokchoi 3 hours ago There were a few chips that supported directly executing JVM bytecodes. I'm not sure why it didn't take off, but I think it is generally more performant to JIT compile hotspots to native code.

checker659 8 hours ago Forth CPU (in SystemVerilog): https://www.youtube.com/watch?v=DRtSSI_4dvk

hermitShell 13 hours ago JVM I think I can understand, but do you happen to know more about LISP machines and whether they use an ISA specifically optimized for the language, or if the compilers for x86 end up just doing the same thing?

In general I think the practical result is that x86 is like democracy. It’s not always efficient but there are other factors that make it the best choice.
kragen 8 hours ago They used an ISA specifically optimized for the language. At the time it was not known how to make compilers for Lisp that did an adequate job on normal hardware.

The vast majority of computers in the world are not x86.
hwpythonner 16 hours ago I built a hardware processor that runs Python programs directly, without a traditional VM or interpreter. Early benchmark: GPIO round-trip in 480ns — 30x faster than MicroPython on a Pyboard (at a lower clock). Demo: https://runpyxl.com/gpio

jonjacky 3 hours ago A much earlier (2012) attempt at a Python bytecode interpreter on an FPGA:

"Running a very small subset of python on an FPGA is possible with pyCPU. The Python Hardware Processsor (pyCPU) is a implementation of a Hardware CPU in Myhdl. The CPU can directly execute something very similar to python bytecode (but only a very restricted instruction set). The Programcode for the CPU can therefore be written directly in python (very restricted parts of python) ..."
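For context on the benchmark quoted in the Show HN comment above, here is a minimal sketch of what the MicroPython side of a GPIO round-trip on a Pyboard might look like. The pin names and the polling approach are assumptions for illustration, not taken from the PyXL demo.

```python
# Hypothetical MicroPython baseline for a GPIO round-trip on a Pyboard.
# Pin names ('X1', 'X2') and the polling loop are assumptions, not the
# actual benchmark code from the PyXL demo.
import pyb

trigger = pyb.Pin('X1', pyb.Pin.IN, pyb.Pin.PULL_DOWN)   # input edge to react to
response = pyb.Pin('X2', pyb.Pin.OUT_PP)                 # output pin to drive

while True:
    if trigger.value():        # poll the input
        response.value(1)      # respond as fast as the interpreter allows
        response.value(0)
```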
obitsten 15 hours ago Why is it not routine to "compile" Python? I understand that the interpreter is great for rapid iteration, cross compatibility, etc. But why is it accepted practice in the Python world to eschew all of the benefits of compilation by just dumping the "source" file in production?

cchianel 15 hours ago The primary reason, in my opinion, is that the vast majority of Python libraries lack type annotations (this includes the standard library). Without type annotations, there is very little for a non-JIT compiler to optimize, since:

- The vast majority of code generation would have to be dynamic dispatches, which would not be too different from CPython's bytecode.
- Types are dynamic; the methods on a type can change at runtime due to monkey patching. As a result, the compiler must be able to "recompile" a type at runtime (and thus, you cannot ship optimized target files).
- There are multiple ways every single operation in Python might be called; for instance `a.b` either does a __dict__ lookup or a descriptor lookup, and you don't know which method is used unless you know the type (and if that type is monkeypatched, then the method that gets called might change).

A JIT compiler might be able to optimize some of these cases (by observing the actual types used), but a JIT compiler can work from the source file and be included in the CPython interpreter.
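To make the dynamic-dispatch point concrete, here is a small illustrative example (not from the thread) of why an ahead-of-time compiler cannot freeze an attribute lookup like `a.b`:

```python
# Monkey patching: the method behind `s.read()` can change at runtime,
# so the call site cannot be bound to one implementation ahead of time.
class Sensor:
    def read(self):
        return 42

s = Sensor()
print(s.read())          # 42 — looked up through Sensor.__dict__ at call time

def fake_read(self):
    return -1

Sensor.read = fake_read  # patch the class after the object exists
print(s.read())          # -1 — same call site, different method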
hwpythonner 14 hours ago You make a great point — type information is definitely a huge part of the challenge.

I'd add that even beyond types, late binding is fundamental to Python’s dynamism: Variables, functions, and classes are often only bound at runtime, and can be reassigned or modified dynamically.
So even if every object had a type annotation, you would still need to deal with names and behaviors changing during execution — which makes traditional static compilation very hard.
That’s why PyXL focuses more on efficient dynamic execution rather than trying to statically "lock down" Python like C++.
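A minimal sketch (not from the thread) of the late-binding issue described above: even fully annotated code resolves names at call time.

```python
# Late binding: `handler` is looked up each time `call_it` runs, so the
# target can change while the program is executing.
def handler(x: int) -> int:
    return x + 1

def call_it() -> int:
    return handler(10)        # name resolved at call time, not compile time

print(call_it())              # 11

def handler(x: int) -> int:   # rebind the module-level name
    return x * 100

print(call_it())              # 1000 — same caller, new target
```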
pjmlp 13 hours ago Solved by Smalltalk, Self, and Lisp JITs, which are at the genesis of JIT technology; some of it landed in HotSpot and V8.

dragonwriter 11 hours ago Python starting with 3.13 also has a JIT available.

pjmlp 11 hours ago Kind of, you have to compile it yourself, and it is rather basic, still early days.

PyPy and GraalPy are where the fun is, however they are largely ignored outside their language research communities.
jonathaneunice 13 hours ago "Addressed" or "mitigated" perhaps. Not "solved." Just "made less painful" or "enough less painful that we don't need to run screaming from the room."pjmlp 13 hours ago Versus what most folks do with CPython, it is indeed solved.We are very far from having a full single user graphics workstation in CPython, even if those JITs aren't perfect.
Yes, there are a couple of ongoing attempts, while most in the community rather write C extensions.
jonathaneunice 11 hours ago Is "single user graphics workstation" even still a goal? Great target in the Early to Mid Ethernetian when Xerox Dorados and Dandelions, Symbolics, and Liliths roamed the Earth. Doesn't feel like a modern goal or standard of comparison.I used those workstations back in the day—then rinsed and repeated with JITs and GCs for Self, Java, and on to finally Python in PyPy. They're fantastic! Love having them on-board. Many blessings to Deutsch, Ungar, et al. But for 40 years JIT's value has always been to optimize away the worst gaps, getting "close enough" to native to preserve "it's OK to use the highest level abstractions" for an interesting set of workloads. A solid success, but side by side with AOT compilation of closer-to-the-machine code? AOT regularly wins, then and now.
"Solved" should imply performance isn't a reason to utterly switch languages and abstractions. Yet witness the enthusiasm around Julia and Rust e.g. specifically to get more native-like performance. YMMV, but from this vantage, seeing so much intentional down-shift in abstraction level and ecosystem maturity "for performance" feels like JIT reduced but hardly eliminated the gap.
kragen 8 hours ago "Single-user graphical workstation" may not be a great goal anymore, but it's at least a sobering milestone to keep failing to reach.AFAIK there isn't an AOT compiler from JVM bytecode to native code that's competitive with either HotSpot or Graal, which are JIT compilers. But the JVM semantics are much less dynamic than Python or JS, whose JIT compilers don't perform nearly as well. Even Jython compiled to JVM bytecode and JITted with HotSpot is pretty slow.
However, LuaJIT does seem to be competitive with AOT-compiled C and with HotSpot, despite Lua being just as dynamic as Python and more so than JS.
pjmlp 11 hours ago It is solved to the point that users in those communities are not writing extensions in C all the time to compensate for the interpreter implementation.

AOT winning over JITs on micro benchmarks hardly wins in a meaningful way for most business applications, especially when JIT caches and PGO data sharing across runs are part of the picture.

Sure, there are always going to be use cases that require AOT, and in most of them it's due to deployment constraints more than anything else.
Most mainstream devs don't even know how to use PGO tooling correctly from their AOT toolchains.
Heck, how many Electron apps do you have running right now?
Qem 12 hours ago > We are very far from having a full single user graphics workstation in CPython, even if those JITs aren't perfect.

Some years ago there was an attempt to create a Linux distribution including a Python userspace, called Snakeware, but the project has since gone inactive. See https://github.com/joshiemoore/snakeware
pjmlp 12 hours ago I fail to find anything related to having good enough performance for a desktop system written in Python.

homarp 8 hours ago Sugar is built with python
Qem 12 hours ago > The primary reason, in my opinion, is the vast majority of Python libraries lack type annotations (this includes the standard library).

When type annotations are available, it's already possible to compile Python to improve performance, using Mypyc. See for example https://blog.glyph.im/2022/04/you-should-compile-your-python...
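As a rough illustration of the kind of module mypyc can compile (a sketch, not taken from the linked post), a fully annotated file can be turned into a C extension with the `mypyc` command that ships with mypy:

```python
# fib.py — fully annotated code of the sort mypyc can compile.
# Roughly: `pip install mypy`, then `mypyc fib.py`; afterwards `import fib`
# picks up the compiled extension instead of the pure-Python module.
def fib(n: int) -> int:
    a: int = 0
    b: int = 1
    for _ in range(n):
        a, b = b, a + b
    return a

if __name__ == "__main__":
    print(fib(30))  # 832040
```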
Someone 15 hours ago Python doesn’t eschew all benefits of compilation. It is compiled, but to an intermediate byte code, not to native code, (somewhat) similar to the way Java and C# compile to byte code. Those, at runtime (and, nowadays, optionally also at compile time), convert that to native code. Python doesn’t; it runs a bytecode interpreter.

The reason Python doesn’t do that is a mix of lack of engineering resources, the desire to keep the implementation fairly simple, and the requirement of backwards compatibility for C code calling into Python to manipulate Python objects.
jerf 14 hours ago If you define "compiling Python" as basically "taking what the interpreter would do but hard-coding the resulting CPU instructions executed instead of interpreting them", the answer is, you don't get very much performance improvement. Python's slowness is not in the interpreter loop. It's in all the things it is doing per Python opcode, most of which are already compiled C code.If you define it as trying to compile Python in such a way that you would get the ability to do optimizations and get performance boosts and such, you end up at PyPy. However that comes with its own set of tradeoffs to get that performance. It can be a good set of tradeoffs for a lot of projects but it isn't "free" speedup.
jonathaneunice 13 hours ago A giant part of the cost of dynamic languages is memory access. It's not possible, in general, to know the type, size, layout, and semantics of values ahead of time. You also can't put "Python objects" or their components in registers like you can with C, C++, Rust, or Julia "objects." Gradual typing helps, and systems like Cython, RPython, PyPy etc. are able to narrow down and specialize segments of code for low-level optimization. But the highly flexible and dynamic nature of Python means that a lot of the work has to be done at runtime, reading from `dict` and similar dynamic in-memory structures. So you have large segments of code that are accessing RAM (often not even from caches, but genuine main memory, and often many times per operation). The associated IO-to-memory delays are HUGE compared to register access and computation more common to lower-level languages. That's irreducible if you want Python semantics (i.e. its flexibility and generality).Optimized libraries (e.g. numpy, Pandas, Polars, lxml, ...) are the idiomatic way to speed up "the parts that don't need to be in pure Python." Python subsets and specializations (e.g. PyPy, Cython, Numba) fill in some more gaps. They often use much tighter, stricter memory packing to get their speedups.
For the most part, with the help of those lower-level accelerations, Python's fast enough. Those who don't find those optimizations enough tend to migrate to other languages/abstractions like Rust and Julia because you can't do full Python without the (high and constant) cost of memory access.
wyldfire 2 hours ago For python, compilation means emitting some bytecode. And you could conceivably ship that bytecode *. But because it's so terribly dynamic of a language, virtually nothing is bound to anything until you execute this particular line. "What code does this function call resolve to?" -- we'll find out when we get there. "What type does this local use?" -- we'll find out when we get there.Even type annotations would have to be anointed with semantics, which (IIUC) they have none today (w/CPython AFAIK). They are just annotations for use by static checkers.
Unless you can perform optimizations, the compilation can't make a whole bunch of progress beyond that bytecode.
* In fact, IIRC there was/is some "freeze" program that would do just that: compile your python program. Under the covers it would bundle libpython with your *.pyc bytecode.
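A small sketch (not from the comment above) of what that bytecode looks like, and why it still defers every binding to run time:

```python
# Compiling Python produces bytecode, but the bytecode still resolves names
# and operator dispatch only when it executes. Opcode names vary by version.
import dis

def f(x, y):
    return helper(x) + y   # which `helper`? which `+`? decided at run time

dis.dis(f)
# Typical CPython 3.11-style output (abridged):
#   LOAD_GLOBAL  helper    <- name looked up when the line runs
#   LOAD_FAST    x
#   ...
#   BINARY_OP    + (add)   <- dispatch depends on the runtime types
#   RETURN_VALUE
```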
franga2000 15 hours ago There's no benefit that I know of, besides maybe a tiny cold start boost (since the interpreter doesn't need to generate the bytecode first).

I have seen people do that for closed-source software that is distributed to end-users, because it makes reverse engineering and modding (a bit) more complicated.
Qem 13 hours ago Check Nuitka: https://nuitka.net/

dragonwriter 11 hours ago > Why is it not routine to "compile" Python?

Where’s the AOT compiler that handles the whole Python language?

It’s not routine because it's not even an option, and people who are concerned either use the tools that let them compile a subset of Python within a larger, otherwise-interpreted program, or use a different language.
hwpythonner 15 hours ago There have been efforts (like Cython, Nuitka, PyPy’s JIT) to accelerate Python by compiling subsets or tracing execution — but none fully replace the standard dynamic model, at least as far as I know.

ModernMech 14 hours ago Part of the issue is the number of instructions Python has to go through to do useful work. Most of that is unwrapping values and making sure they're the right type to do the thing you want.

For example, if you compile x + y in C, you'll get a few clean instructions that add the data types of x and y. But if you compile the same thing in some sort of Python compiler, it would essentially have to include the entire Python interpreter; because it can't know what x and y are at compile time, there necessarily has to be some runtime logic that is executed to unwrap values, determine which "add" to call, and so forth.
If you don't want to include the interpreter, then you'll have to add some sort of static type checker to Python, which is going to reduce the utility of the language and essentially bifurcate it into annotated code you can compile, and unannotated code that must remain interpreted at runtime that'll kill your overall performance anyway.
That's why projects like Mojo exist and go in a completely different direction. They are saying "we aren't going to even try to compile Python. Instead we will look like Python, and try to be compatible, but really we can't solve these ecosystem issues so we will create our own fast language that is completely different yet familiar enough to try to attract Python devs."
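A rough sketch (illustrative only, ignoring subclass-priority details) of the runtime logic hiding behind a single `x + y`, i.e. the "determine which add to call" step described above:

```python
# Approximately what the interpreter does for `x + y` when the types are
# only known at run time: try __add__, then fall back to __radd__.
def add(x, y):
    result = type(x).__add__(x, y)
    if result is NotImplemented:
        radd = getattr(type(y), "__radd__", None)
        result = radd(y, x) if radd is not None else NotImplemented
    if result is NotImplemented:
        raise TypeError(f"unsupported operand types for +: "
                        f"{type(x).__name__!r} and {type(y).__name__!r}")
    return result

print(add(1, 2))        # 3    — int handles it
print(add(1, 2.5))      # 3.5  — int.__add__ declines, float.__radd__ handles it
print(add("py", "xl"))  # 'pyxl'
```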
kragen 7 hours ago You don't need the whole Python interpreter to fall back to dynamic method dispatch for overloaded operators. CPython itself implements them with per-interface vtables for C extensions, very similar to Golang but laboriously constructed by hand.For most code, you don't need static typing for most overloaded operators to get decent performance, either. From my experience with Ur-Scheme, even a simple prediction that most arithmetic is on (small) integers with a runtime typecheck and conditional jump before inlining the integer version of each arithmetic operation performs remarkably well—not competitive with C but several times faster than CPython. It costs you an extra conditional branch in the case where the type is something else, but you need that check anyway if you are going to have unboxed integers, and it's smallish compared to the call and return you'll need once you find the correct overload to call. (I didn't implement overloading in Ur-Scheme, just exiting with an error message.)
Even concatenating strings is slow enough that checking the tag bits to see if you are adding integers won't make it much slower.
Where this approach really falls down is choosing between integer and floating point math. (Also, you really don't want to box your floats.)
And of course inline caches and PICs are well-known techniques for handling this kind of thing efficiently. They originated in JIT compilers, but you can use them in AOT compilers too; Ian Piumarta showed that.
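A Python model (illustrative, not the actual Ur-Scheme code) of the "predict small integers, check, then fall back" shape described above:

```python
# The compiled fast path conceptually looks like this: a cheap type check,
# an inlined integer add for the common case, and an out-of-line slow path.
def generic_add(x, y):
    return x + y            # stand-in for full dynamic dispatch / overloads

def add_fastpath(x, y):
    if type(x) is int and type(y) is int:   # the predicted common case
        return x + y                        # inlined integer addition
    return generic_add(x, y)                # rare slow path

print(add_fastpath(2, 3))     # fast path
print(add_fastpath(2.0, 3))   # slow path
```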
seanw444 13 hours ago It's called Nim.

archargelod 1 hour ago Comparing Nim to compiled Python is almost insulting. Smaller binaries, faster execution, proper metaprogramming, actual type safety, and you don't need to bundle a whole interpreter just to say "hello world".
boutell 13 hours ago This is very, very cool. Impressive work.

I'm interested to see whether the final feature set will be larger than what you'd get by creating a type-safe language with a pythonic syntax and compiling that to native, rather than building custom hardware.
The background garbage collection thing is easier said than done, but I'm talking to someone who has already done something impressively difficult, so...
rangerelf 9 hours ago > I'm interested to see whether the final feature set will be larger than what you'd get by creating a type-safe language with a pythonic syntax and compiling that to native, rather than building custom hardware.It almost sounds like you're asking for Nim ( https://nim-lang.org/ ); and there are some projects using it for microcontroller programming, since it compiles down to C (for ESP32, last I saw).
rkagerer 12 hours ago Back when C# came out, I thought for sure someone would make a processor that would natively execute .NET bytecode. Glad to see it finally happened for some language.

kcb 11 hours ago For Java, this was around for a bit: https://en.wikipedia.org/wiki/Jazelle

monocasa 11 hours ago Even better was a complete system rather than a mode for ARM processors that ran a subset of the common JVM opcodes.

varispeed 11 hours ago Didn't some phones have hardware Java execution or does my memory fail me?

Sesse__ 10 hours ago It's called Jazelle.

lodovic 10 hours ago Sun tried to build one too, they called it the JavaChip iirc. It was meant for JavaStations, kiosk machines, and mobile phones but it never took off. https://en.wikipedia.org/wiki/Java_processor
supportengineer 10 hours ago Does anyone remember the JavaOne ring giveaway?

jiehong 11 hours ago Java got that with smart cards, for example. Cute oddities of the past.

monocasa 11 hours ago JavaCard was just implemented as a regular interpreter last time I checked.
zahlman 10 hours ago In university, for my undergrad thesis, I wanted to do this for a Befunge variant (choosing the character set to simplify instruction decoding). My supervisor insisted on something more practical, though. :(

zahlman 8 hours ago I probably should have added a link: https://esolangs.org/wiki/Befunge

The main thing that appealed to me about this idea is that it would require a two-dimensional program counter. As I recall from the original specification, skipping through blank space is supposed to take O(1) time, but I didn't plan on implementing that. I did, however, imagine a machine with 256x256 bytes of memory, where some 80x25 (or 24?) region was reserved as directly memory-mapped to a character display (and protected at boot by surrounding it with jump instructions).
ComputerGuru 10 hours ago I want to say there was a product that did this circa 2006-2008 but all I’m finding is the .NET Micro Framework and its modern successor the .NET nano Framework.I’ve been using .NET since 2001 so maybe I have it confused with something else, but at the same time a lot of the web from that era is just gone, so it’s possible something like this did exist but didn’t gain any traction and is now lost to the ether.
duskwuff 6 hours ago There was Netduino, but that was a STM32 microcontroller running an interpreter, not dedicated hardware which directly executed CLR code.

rcorrear 10 hours ago Maybe you’re thinking of Singularity OS?
john-h-k 9 hours ago The tl;dr (I spent lots of time investigating this) is that it just fundamentally isn’t a good bytecode for execution. It’s designed to be small on disk, not hardware friendly.

whoomp12342 11 hours ago I'd be surprised if azure app services didn't do this already.

john-h-k 11 hours ago I’d be willing to bet my net worth that they don’t.

whoomp12342 8 hours ago then why does azure app services have you pick the .net version?!

john-h-k 7 hours ago I can't tell if this is a joke but will assume not. It's because the .net version is needed for some reason. There are no processors that run .net bytecode, primarily because they would be slower and worse (and again, they don't exist).
actionfromafar 11 hours ago Wouldn't that be a real scoop?

bongodongobob 11 hours ago Azure runs on Linux if I'm not mistaken.

ggiesen 3 hours ago Nope.

bongodongobob 31 minutes ago Can you tell me how I'm misunderstanding this? https://en.m.wikipedia.org/wiki/Azure_Linux?utm_source=chatg...
sunray2 5 hours ago Very interesting!

What are the fundamental physical limits here? Namely, timing precision, latency and jitter? How fast could PyXL bytecode react to an input?

For info, there is ARTIQ: a vaguely similar thing that effectively executes Python code with 'embedded level' performance:
https://m-labs.hk/experiment-control/artiq/
ARTIQ is quite common in quantum physics labs. For that you need very precise and deterministic timing. Imagine you're interfering two photons as they reach a piece of glass, so that they can interact. It doesn't get faster than photons! That typically means nanosecond timing, sub-microsecond latency.
How ARTIQ does it is also interesting. The Python code is separate from the FPGA which actually executes the logic you want to do. In a hand-wavy way, you're then 'as fast' as the FPGA. How, though? The catch is, you have to get the Python code and FPGA gateware talking to each other, and that's technically difficult and has many gotchas. In comparison, although PyXL isn't as performant, if it makes it simpler for the user, that's a huge win for everyone.
Congrats once again!
rthomas6 15 hours ago * What HDL did you use to design the processor?

* Could you share the assembly language of the processor?
* What is the benefit of designing the processor and making a Python bytecode compiler for it, vs making a bytecode compiler for an existing processor such as ARM/x86/RISCV?
hwpythonner 15 hours ago Thanks for the question.

HDL: Verilog
Assembly: The processor executes a custom instruction set called PySM (Not very original name, I know :) ). It's inspired by CPython Bytecode — stack-based, dynamically typed — but streamlined to allow efficient hardware pipelining. Right now, I’m not sharing the full ISA publicly yet, but happy to describe the general structure: it includes instructions for stack manipulation, binary operations, comparisons, branching, function calling, and memory access.
Why not ARM/X86/etc... Existing CPUs are optimized for static, register-based compiled languages like C/C++. Python’s dynamic nature — stack-based execution, runtime type handling, dynamic dispatch — maps very poorly onto conventional CPUs, resulting in a lot of wasted work (interpreter overhead, dynamic typing penalties, reference counting, poor cache locality, etc.).
pak9rabid 13 hours ago Wow, this is fascinating stuff. Just a side question (and please understand I am not a low-level hardware expert, so pardon me if this is a stupid question): does this arch support any sort of speculative execution, and if so do you have any sort of concerns and/or protections in place against the sort of vulnerabilities that seem to come inherent with that?

hwpythonner 13 hours ago Thanks — and no worries, that’s a great question!

Right now, PyXL runs fully in-order with no speculative execution. This is intentional for a couple of reasons: First, determinism is really important for real-time and embedded systems — avoiding speculative behavior makes timing predictable and eliminates a whole class of side-channel vulnerabilities. Second, PyXL is still at an early stage — the focus right now is on building a clean, efficient architecture that makes sense structurally, without adding complex optimizations like speculation just for the sake of performance.
In the future, if there's a clear real-world need, limited forms of prediction could be considered — but always very carefully to avoid breaking predictability or simplicity.
ammar2 7 hours ago > it includes instructions for stack manipulation, binary operations

Your example contains some integer arithmetic, I'm curious if you've implemented any other Python data types like floats/strings/tuples yet. If you have, how does your ISA handle binary operations for two different types like `1 + 1.0`, is there some sort of dispatch table based on the types on the stack?
kragen 7 hours ago Python the language isn't stack-based, though CPython's bytecode is. You could implement it just as well on top of a register-based instruction set. You may have a point about the other features that make it hard to compile, though.

tlb 8 hours ago How do you deal with instructions that iterate through variable amounts of memory, like concatenating strings? Are such instructions interruptible?

Perhaps they don't need to be interruptible if there's no virtual memory.
How does it allocate memory? Malloc and free are pretty complex to do in hardware.
larusso 11 hours ago This sounds like your ‚arch‘ (sorry don‘t 100% know the correct term here) could potentially also run ruby/js if the toolchain can interpret it into your assembly language?

hwpythonner 10 hours ago Good question — I’m not 100% sure. I'm not an expert on Ruby or JS internals, and I haven’t studied their execution models deeply. But in theory, if the language is stack-based (or can be mapped cleanly onto a stack machine), and if the ISA is broad enough to cover their needs, it could be possible. Right now, PyXL’s ISA is tuned around Python’s patterns — but generalizing it for other languages would definitely be an interesting challenge.

larusso 9 hours ago I assume Lua would fit the bill then definitely.

Edit: Just want to mention that this sounds like a super interesting project. I have to admit that I struggled to see where python was run on the hardware when mentioning custom toolchains and a compilation step. But the important aspect is that your hardware runs this similar to how a vm would run it with all dynamic aspects of the language included. I wonder similar to a parent comment if something similar for wasm would be worth having.
_kb 5 hours ago Extending that, WASM execution could be interesting to explore.
thenobsta 14 hours ago Amazing work! This is a great project!

Every time I see a project that has a great implementation on an FPGA, I lament the fact that Tabula didn’t make it, a truly innovative and fast FPGA.
froh 15 hours ago Do I get this right? This is an ASIC running a python-specific microcontroller which has python-tailored microcode? And together with that a python bytecode -> microcode compiler plus support infrastructure to get the compiled bytecode to the asic?

fun :-)
but did I get it right?
hwpythonner 15 hours ago You're close: It's currently running on an FPGA (Zynq-7000) — not ASIC yet — but yeah, could be transferable to ASIC (not cheap though :))

It's a custom stack-based hardware processor tailored for executing Python programs directly. Instead of traditional microcode, it uses a Python-specific instruction set (PySM) that hardware executes.
The toolchain compiles Python → CPython Bytecode → PySM Assembly → hardware binary.
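A minimal sketch of only the first, software stage of such a toolchain (the PySM and hardware-binary stages are PyXL-specific and not public), showing how Python source becomes inspectable CPython bytecode:

```python
# Stage one of such a toolchain: Python source -> CPython code object,
# whose instructions a later backend could translate to a custom ISA.
import dis

source = """
x = 0
while x < 10:
    x = x + 1
"""

code = compile(source, "<example>", "exec")
for instr in dis.get_instructions(code):
    print(instr.offset, instr.opname, instr.argrepr)
```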
cchianel 15 hours ago As someone who did a CPython Bytecode → Java bytecode translator (https://timefold.ai/blog/java-vs-python-speed), I strongly recommend against the CPython Bytecode → PySM Assembly step:

- CPython Bytecode is far from stable; it changes every version, sometimes changing the behaviour of existing bytecodes. As a result, you are pinned to a specific version of Python unless you make multiple translators.
- CPython Bytecode is poorly documented, with some descriptions being misleading/incorrect.
- CPython Bytecode requires restoring the stack on exception, since it keeps a loop iterator on the stack instead of in a local variable.
I recommend instead doing CPython AST → PySM Assembly. CPython AST is significantly more stable.
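For a quick sense of the two front-end targets being discussed (a sketch, not from the comment): the AST tracks the language grammar, while the bytecode tracks a specific interpreter version.

```python
# The same source viewed as an AST (fairly stable across releases) and as
# bytecode (opcode names and encodings shift between CPython versions).
import ast
import dis

src = "total = a + b"

print(ast.dump(ast.parse(src)))          # Assign(... BinOp(Name 'a', Add, Name 'b') ...)
dis.dis(compile(src, "<src>", "exec"))   # LOAD_NAME / BINARY_OP etc., version-dependent
```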
hwpythonner 13 hours ago Thanks — really appreciate your insights.

You're absolutely right that CPython bytecode changes over time and isn’t perfectly documented — I’ve also had to read the CPython source directly at times because of unclear docs.
That said, I intentionally chose to target bytecode instead of AST at this stage. Adhering to the AST would actually make me more vulnerable to changes in the Python language itself (new syntax, new constructs), whereas bytecode changes are usually contained to VM-level behavior. It also made it much easier early on, because the PyXL compiler behaves more like a simple transpiler — taking known bytecode and mapping it directly to PySM instructions — which made validation and iteration faster.
Either way, some adaptation will always be needed when Python evolves — but my goal is to eventually get to a point where only the compiler (the software part of PyXL) needs updates, while keeping the hardware stable.
cchianel 10 hours ago CPython bytecode changes behaviour for no reason and very suddenly, so you will be vulnerable to changes in Python language versions. A few from the top of my head:

- In Python 3.10, jumps changed from absolute indices to relative indices
- In Python 3.11, cell variables index is calculated differently for cell variables corresponding to parameters and cell variables corresponding to local variables
- In Python 3.11, MAKE_FUNCTION has the code object at the TOS instead of the qualified name of the function
For what it's worth, I created a detailed behaviour of each opcode (along with example Python sources) here: https://github.com/TimefoldAI/timefold-solver/blob/main/pyth... (for up to Python 3.11).
nurettin 15 hours ago This was my first thought as well. They will be stuck at a certain python version
bangaladore 11 hours ago Have you considered joining the next tiny tapeout run? This is exactly the type of project I'm sure they would sponsor or try to get to asic.

In case you weren't aware, they give you 200 x 150 um tile on a shared chip. There is then some helper logic to mux between the various projects on the chip.
relistan 15 hours ago Not an ASIC, it’s running on an FPGA. There is an ARM CPU that bootstraps the FPGA. The rest of what you said is about right.
JadoJodo 10 hours ago I'd like to invite any Python devs to go on a tangent with me:

Can you give me the scoop on Python, the language? I see things like this project, and it seems very impressive, but being an outsider to the language, I don't "get" it. More specifically: I'm curious to hear thoughts on a) what made this difficult prior to now (with Python), b) why Python is useful for this, and c) what are your thoughts on Python itself?
To add some more context:
I know a lot of developers who work with Python (Flask); Some love it, some hate it (as with any language). My experience has been mainly via homelab/OSS tools that all seem to embrace the language. And yet while the language itself seems very straight forward and easy to use, my experience with the Python _ecosystem_ (again, as an outsider) has been... difficult.
Python 2 vs 3, virtual environments, libraries for each version, etc. It feels as though anytime I've had to use it outside a pre-built Docker container, these issues result in throwing spaghetti at the wall trying to figure out how to even get it working at all. As a PHP/Go dev, it's one of the languages for which I could see myself having a real interest, but this has so far made me hesitant (and I don't want to be).
spprashant 9 hours ago The gist is that basic Python at its very core is: a) simple, b) limited.
The language really took off when developers took this simple limited language and pushed it to its very limits using C extensions. The data science explosion opened up the language to a very wide user base.
So to answer your 3 questions: a) Python is not a fast language by any means. There is a lot of overhead in every function call that makes it almost impossible for low latency/real-time use cases. b) I don't think Python is particularly the best language for this. This is just a demonstration of someone building their own custom toolchain to show what is possible with just pure Python. The author has highlighted why they think this is interesting on the website. c) I keep thinking Python will go away soon, and we will see a much better alternative. But the reality is Python is entrenched deeply just like JavaScript. Lot of smart people are putting in a lot of effort to make it better. Personally the ecosystem and packaging story does not annoy me much, but the lack of proper threading (GIL) has hurt my projects more than once.
For your particular pain point, the current community recommended solution is to use uv (https://github.com/astral-sh/uv). There were several detours (pip, pyenv, pipenv, poetry etc.) the community took before they got behind this.
miohtama 7 hours ago Before data science, Python was already heavily used in web backends, e.g. Instagram and others.

spprashant 4 hours ago Yeah, true, and I think it was heading on a Ruby-like trajectory. It was the data science/ML trend that really cemented its status.
PaulHoule 9 hours ago My impression was that if you had a problem with Python and then added Docker now you have two problems. I worked at one place where the data sci's had an amazing ability to find defective Pythons.Python is going in the right directions in terms of all the deployability and big issues but it should have been where it is now 7 years ago. Specifically, I sketched out a system that worked like uv but was written in pure Python, I didn't start on it for two reasons: (a) the bootstrapping problem that I couldn't ever stop devs from trashing the Python that it runs in, and (b) from lots of trying it didn't seem possible to convince most Pythoners that pip was broken or that it mattered... uv solved (a) by removing Python from the bootstrap and (b) by being crazy fast.
__MatrixMan__ 5 hours ago There are parts of python that chafe, but if I switch to a language which has solved those problems, the set of people I can help falls to... very small. These are people we fought tooth and nail to drag away from excel, we're not going to get them all the way to haskell.

em3rgent0rdr 6 hours ago b: while Python is not a high-performance language, Python coding is easier than in high-performance languages, and programmer time is valuable. But after coding a project in Python, the developer may find that they need higher performance than what interpreted Python offers, and thus might be tempted to redo their program in a high-performance language. A non-interpreted Python processor provides a more appealing alternative: spend money on an FPGA (or in the future maybe even an ASIC) Python co-processor which may be fast enough, rather than spending programmer time porting their Python code to a high-performance language.

whatnow37373 9 hours ago Old-timer here, used Python for about ten years professionally (Go now).

c) It’s a monstrous dumpster fire and getting worse over time, but so is everything else (in the same space). I like Go, but I can see how it’s not for everyone.
TheFlyingFish 9 hours ago I've used Python a lot over the last ~10 years. It's probably my favorite language, although I'm not immune to its weak points.

To answer your questions in order,
a) I haven't done much work with embedded Python, but like any dynamically-typed language that runs in a VM there's a lot of runtime infrastructure that adds latency, complexity, energy consumption, bundle size, etc. It sounds like this project aims to remove the vast majority of that. So take startup time, for instance: Normal Python takes ~50ms to fire up the interpreter and get into actual user code. If I'm understanding it correctly, with PyXL that would be vastly lower. Although I guess the ARM chip still has to load the code onto the FPGA, so maybe not, idk.
b) and c) are kind of the same question, to me - at least, "why use Python for embedded" is a subset of "why use Python at all."
For me, Python more than any other language is great at getting out of its own way, so that you can spend your precious brain energy on whatever problem you're solving and less on the tool you're using to solve it. This is maybe less true in recent years, as later Pythons have added a lot more complex features (like async/await, for instance, which I actually really like in Python but definitely adds complexity to the language).
Finally, I think a lot of it comes down to personal style/taste/chance (i.e. if Python is the first language you encounter, you're probably more likely to end up liking Python.) The Zen of Python[0], which you may have seen, does a good job of explaining the Python way of approaching problems, although like I said a few of those principles have been less-rigidly adhered to in recent years (like "there should be only one way to do it.")
If you hang out in Python circles, you'll probably come across the phrase "Python fits your brain." I'm not sure where it was originally coined but it very definitely describes my experience with Python: it (mostly) just works like I expect it to, whether that's with regard to syntax, semantics, stdlib, etc.
Not that it doesn't have its bad points, of course. Dependency management, as you mentioned, can be a bit hellish at times. A lot of it comes down to the fact that dependencies in Python were originally conceived as systemwide state, much like dynamically-loaded C libs on Linux. This works fine until you need to use two different, mutually-incompatible versions of the same lib, at which point all hell breaks loose. There have been various attempts to improve on this more recently, so far uv[1] looks pretty promising, but time will tell.
The one saving grace of Python dependencies is that it has a very rich standard library, so the average Python project tends to have way fewer total dependencies than the average project in, say, JS or Rust.
The typing story for Python is also a bit lacking. Yes, there are now optional type hints and things like MyPy to make use of them, but even if your own code is all completely typed, in my experience it's usually not long before you need to call out to something that isn't well-typed and then your whole house of cards starts to fall apart.
Anyway, just my rambling $0.02.
JadoJodo 8 hours ago Not all rambling, but the exact kind of input I was hoping for. Thank you!
willvarfar 9 hours ago Yeah python has become more and more version and deps hell. Honestly 3 was all cost and no benefit and we'd all be fine if we'd stuck with 2. There were also some early missteps in api design like async and pandas and matplotlib that we all now have to live with. I even ran into problems with PIL changing API for textsize recently. Just a thousand cuts.

And yet for simple little standalone programs and notebooks, particularly for science, it is super simple and natural to turn to it.
nonameiguess 8 hours ago Factors I personally think led to Python's popularity:

1) Perl kind of shooting itself in the foot 20 years ago and Python becoming the de facto scripting language for Linux distributions that needed to do anything more complicated than was suitable for shell scripts but didn't require entirely new compiled software projects.
2) The above meant Python is almost always available and a good tool to have handy if you need to do something one-off and simple but more complicated than what you can do with a built-in calculator app. For instance, ever curious if you can pull the exponents off of x509 certificates and manually verify signatures by hand? Pretty easy to do in Python.
3) The C API and compiled modules made it possible to link against pre-existing BLAS implementations, and the extensible syntax and user-defined operators made it possible to mimic the style of MATLAB and R. Thus, Python became a popular choice as a lingua franca for engineers, scientists, and stats geeks who just wanted to do some data exploration or modeling and weren't trying to create shippable software.
4) MIT decided to make Python its primary teaching language in the early 2000s or so and a lot of CS programs in the US followed suit.
5) It became possible at some point to write Microsoft Office macros in Python, giving marginally technical business types a nice option to learn that was more broadly useful than VB script to automate their own workflows.
Why it ever became so popular among actual software developers I have a harder time answering, but for research, exploratory work, prototyping, scripting, workflow automation, it's as good as anything else you can come up with, usually already available, and it has an extremely "batteries included" standard library that means you probably don't need to worry about the kind of ecosystem dependency hell you're envisioning here.
Possibly some factors include the rise of LeetCode, as Python's "executable pseudocode" style means it is very easy to find or translate examples of algorithm implementations into Python solutions for learning, and the fact that a large trend of the post big data era is trying to turn exploratory data analysis pipelining tasks into real software, along with people who used to brand themselves as "data scientists" deciding to become software developers instead, and already knowing Python.
Python also gives you a pretty good first order approximation of a solution when you want to turn some researcher's data model into a service, provided your app is also written in Python. This has become far less important these days with data APIs, ML APIs, standardized formats for model serialization, but previously, a very popular solution to the so-called "two language problem" was just making Python fast enough to let it be both languages itself rather than trying to add web app frameworks to Julia.
VWWHFSfQ 9 hours ago Python is just brutally slow. Anything performance-sensitive has to be done with a native module and now that requires all the same compilation and build tooling that everything else does.

The ecosystem is massive and the core team just keeps adding more and more dubious language features and syntax.
Realistically, Python should have been "done" after async/await and fixing str vs bytes.
carabiner 9 hours ago This just seems like a complaint about python package management disguised as a question (aka concern trolling). Yes it's bad. No, it probably won't be improved any time soon.

JadoJodo 8 hours ago That wasn't my intention at all, but I appreciate that it came across that way to you. Please know that I was/am sincere in my desire to hear the thoughts of others while this is a current topic.
yanniszark 12 hours ago Great work! :D I had a question about that though. Instead of compiling to PySM, why not compile directly to a real assembly like ARM? Is the PySM assembly very special to accommodate Python features in a way that can't be done efficiently in existing architectures like ARM?

hwpythonner 11 hours ago Thanks — appreciate it!

Good question. In theory, you can compile anything Turing-complete to anything else — ARM and Python are both Turing-complete. But practically, Python's model (dynamic typing, deep use of the stack) doesn't map cleanly onto ARM's register-based, statically-typed instruction set. PySM is designed to match Python’s structure much more naturally — it keeps the system efficient, simpler to pipeline, and avoids needing lots of extra translation layers.
Jean-Papoulos 15 hours ago >PyXL is a custom hardware processor that executes Python directly — no interpreter, no JIT, and no tricks. It takes regular Python code and runs it in silicon.

So, no using C libraries. That takes out a huge chunk of pip packages...
hwpythonner 15 hours ago You're absolutely right — today, PyXL only supports pure Python execution, so C extensions aren’t directly usable.

That said, in future designs, PyXL could work in tandem with a traditional CPU core (like ARM or RISC-V), where C libraries execute on the CPU side and interact with PyXL for control flow and Python-level logic.
There’s also a longer-term possibility of compiling C directly to PyXL’s instruction set by building an LLVM backend — allowing even tighter integration without a second CPU.
Right now the focus is on making native Python execution viable and efficient for real-time and embedded systems, but I definitely see broader hybrid models ahead.
bieganski 14 hours ago it would be nice to have some peripheral drivers implemented (UART, eMMC etc).having this, the next tempting step is to make `print` function work, then the filesystem wrapper etc.
btw - what i'm missing is a clear information of limitations. it's definitely not true that i can take any Python snippet and run it using PyXL (for example threads i suppose?)
hwpythonner 14 hours ago Great points!

Peripheral drivers (like UART, SPI, etc.) are definitely on the roadmap - They'd obviously be implemented in HW. You're absolutely right — once you have basic IO, you can make things like print() and filesystem access feel natural.
Regarding limitations: you're right again. PyXL currently focuses on running a subset of real Python — just enough to show it's real python and to prove the core concept, while keeping the system small and efficient for hardware execution. I'm intentionally holding off on implementing higher-level features until there's a real use case, because embedded needs can vary a lot, and I want to keep the system tight and purpose-driven.
Also, some features (like threads, heavy runtime reflection, etc.) will likely never be supported — at least not in the traditional way — because PyXL is fundamentally aimed at embedded and real-time applications, where simplicity and determinism matter most.
throwup238 10 hours ago Are you planning on licensing the IP core? It would be great to have your core integrated with ESP32, running alongside their other architectures, so they can handle the peripheral integration, wifi, and Python code loading into your core, while it sits as another master on the same bus as the other peripherals.

Do you plan to have AMBA or Wishbone Bus support?
hwpythonner 8 hours ago Thanks — yes, licensing is something I'm open to exploring in the future.

PyXL already communicates with the ARM side over AXI today (Zynq platform).
willvarfar 14 hours ago Fantastic work! :D Must be super-satisfying to get it up and running! :D

Is it tied to a particular version of python?
hwpythonner 14 hours ago Thanks — it’s definitely been incredibly satisfying to see it run on real hardware!

Right now, PyXL is tied fairly closely to a specific CPython version's bytecode format (I'm targeting CPython 3.11 at the moment).
That said, the toolchain handles translation from Python source → CPython bytecode → PyXL Assembly → hardware binary, so in principle adapting to a new Python version would mainly involve adjusting the frontend — not reworking the hardware itself.
Longer term, the goal is to stabilize a consistent subset of Python behavior, so version drift becomes less painful.
hoistbypetard 3 hours ago It seems worth noting that the board you're comparing it to costs <$30 where the dev board you're running on costs $250+.That said... awesome work! I wish I could get to PyCon this year to see your talk.
Are you planning to post your core so others can replicate your work?
kristianpaul 9 hours ago This always made me think back to the J1 Forth CPU: https://excamera.com/files/j1.pdf

wodenokoto 15 hours ago I can totally see a future where you can select “accelerated python” as an option for your AWS lambda code.

hwpythonner 14 hours ago When I first started PyXL, this kind of vision was exactly on my mind.

Maybe not AWS Lambda specifically, but definitely server-side acceleration — especially for machine learning feature generation, backend control logic, and anywhere pure Python becomes a bottleneck.
It could definitely get there — but it would require building a full-scale deployment model and much broader library and dynamic feature support.
That said, the underlying potential is absolutely there.
petra 14 hours ago This sounds brilliant.

What's missing so you could create a demo for VCs or the relevant companies, proving the potential of this as a competitive server-class core?
hwpythonner 14 hours ago Good question!

PyXL today is aimed more at embedded and real-time systems.
For server-class use, I'd need to mature heap management, add basic concurrency, a simple network stack, and gather real-world benchmarks (like requests/sec).
That said, I wouldn’t try to fully replicate CPython for servers — that's a very competitive space with a huge surface area.
I'd rather focus on specific use cases where deterministic, low-latency Python execution could offer a real advantage — like real-time data preprocessing or lightweight event-driven backends.
When I originally started this project, I was actually thinking about machine learning feature generation workloads — pure Python code (branches, loops, dynamic types) without heavy SIMD needs. PyXL is very well suited for that kind of structured, control-flow-heavy workload.
If I wanted to pitch PyXL to VCs, I wouldn’t aim for general-purpose servers right away. I'd first find a specific, focused use case where PyXL's strengths matter, and iterate on that to prove value before expanding more broadly.
noosphr 13 hours ago I need to bit bang the RHS2116 at 25MHz: https://intantech.com/files/Intan_RHS2116_datasheet.pdf

Right now I'm doing this with a dsl with an fpga talking to a computer.
Does your python implementation let you run at speeds like that?
If yes, is there any overhead left for dsp - preferably fp based?
nynx 11 hours ago This is cool for sure. I think you’ll ultimately find that this can’t really be faster than modern OoO cores because python instructions are so complex. To execute them OoO or even at a reasonable frequency (e.g. to reduce combinatorial latency), you’ll need to emit type-specialized microcode on the fly, but you can’t do that until the types are known — which is only the case once all the inputs are known for python.

hwpythonner 11 hours ago Thanks — appreciate it!

You're right that dynamic typing makes high-frequency execution tricky, and modern OoO cores are incredibly good at hiding latencies. But PyXL isn't trying to replace general-purpose CPUs — it's designed for efficient, predictable execution in embedded and real-time systems, where simplicity and determinism matter more than absolute throughput. Most embedded cores (like ARM Cortex-M and simple RISC-V) are in-order too — and deliver huge value by focusing on predictability and power efficiency.

That said, there’s room for smart optimizations even in a simple core — like limited lookahead on types, hazard detection, and other techniques to smooth execution paths. I think embedded and real-time represent the purest core of the architecture — and once that's solid, there's a lot of room to iterate upward for higher-end acceleration later.
IshKebab 11 hours ago Very cool! Nobody who really wants simplicity and determinism is going to be using Python on a microcontroller though.

rangerelf 8 hours ago That's funny, there's a huge community of people doing just that: https://circuitpython.org/awesome

actionfromafar 10 hours ago Hm, why not though. People managed to do it with tiny JVMs before, so why not a Python variant.

IshKebab 8 hours ago Java is statically typed and a lot saner than Python, and JavaCard is a fairly restricted subset. Apparently real cards don't typically support garbage collection.

IMO JavaCard doesn't really make sense either. There's clearly space for another language here, though I suspect most people would much rather just use Rust than learn a new language.
actionfromafar 6 hours ago That's fair, except a little reminder that for most people Rust is the new language. :)
gavinsyancey 11 hours ago Sure, but for embedded use cases (which this is targeting), the goal isn't raw speed so much as being fast enough for specific use cases while minimizing power usage / die area / cost.
swoorup 15 hours ago How does garbage collection work here? Is it just a set of PySM code?

hwpythonner 15 hours ago GC is still a WIP, but the key idea is that the system won't stall — garbage collection happens asynchronously, in the background, without interrupting PyXL execution.

jy14898 13 hours ago Sounds similar to something one of my classmates worked on at uni: https://www.bristol.ac.uk/research/groups/trustworthy-system...
M4R5H4LL 10 hours ago I love this kind of project, this is wonderful work. I guess the challenge is to now make it work for general purpose Python. In any case it looks very much like a marketable product already. I would seek financing to see how far this can go.

fluorinerocket 12 hours ago Makes me think of LabVIEW FPGA, where you could run LabVIEW code directly on an FPGA (more like generate VHDL or Verilog from LabVIEW) and do very high loop rate deterministic control systems. Very cool. Except with that you were locked into the National Instruments ecosystem and no one really used it.
tgtweak 13 hours ago Have you tested it on any faster FPGAs? I think Azure has instances with xilinx/AMD accelerators paired.

>Standard_NP10s instance, 1x AMD Alveo U250 FPGA (64GB)
Would be curious to see how this benchmarks on a faster FGPA since I imagine clock frequency is the latency dictator - while memory and tile can determine how many instances can run in parallel.
hwpythonner 12 hours ago Not yet — I'm currently testing on a Zynq-7000 platform (embedded-class FPGA), mainly because it has an ARM CPU tightly integrated (and it's rather cheap). I use the ARM side to handle IO and orchestration, which let me focus the FPGA fabric purely on the Python execution core, without having to build all the peripherals from scratch at this stage.

To run PyXL on a server-class FPGA (like Azure instances), some adaptations would be needed — the system would need to repurpose the host CPU to act as the orchestrator, handling memory, IO, etc.
The question is: what's the actual use case of running on a server? Besides testing max frequency -- for which I could just run Vivado on a different target (would need license for it though)
For now, I'm focusing on validating the core architecture, not just chasing raw clock speeds.
zoobab 9 hours ago You can get cheap Zynq boards on Aliexpress, like old mining boards.

I have a Paralella board here with a Zynq.
focusgroup0 8 hours ago Incredible work. This is a paradigm shift for ML and embedded workflows. And congratulations, you are going to ring the bell with this one.

hwpythonner 8 hours ago Thank you so much — that really means a lot!

It's still early days and there’s a lot more work ahead, but I'm very excited about the possibilities.
I definitely see areas like embedded ML and TinyML as a natural fit — Python execution on low-power devices opens up a lot of doors that weren't practical before.
IlikeKitties 14 hours ago Is this running on an FPGA or were you able to fab a custom chip?

hwpythonner 13 hours ago Just running on FPGA at the moment.

This is still an early-stage project — it's not completed yet, and fabricating a custom chip would involve huge costs.
I'm a solo developer and worked on this in my spare time, so an FPGA was the most practical way to prove the core concepts and validate the architecture.
Longer term, I definitely see ASIC fabrication as the way to unlock PyXL’s full potential — but only once the use case is clear and the design is a little more mature.
IlikeKitties 13 hours ago Oh, my comment wasn't meant as a criticism, just curiosity, because I would have been extremely surprised to see such a project being fabricated.

I find the idea of a processor designed for a specific very high level language quite interesting. What made you choose python and do you think it's the "correct" language for such a project? It sure seems convenient as a language but I wouldn't have thought it is best suited for that task due to the very dynamic nature of it. Perhaps something like Nim which is similar but a little less dynamic would be a better choice?
jamesfmilne 13 hours ago Could be a candidate for Tiny Tapeout in the future.

ActorNightly 10 hours ago I'm not super versed in hardware, but what's the reason you can't adapt this to run on an ARM microprocessor chip? Why go with an FPGA?

Like if I could buy a Cortex board and write Python, hit compile, and have the thing run, this would be INSANELY useful to me, cause Cortex chips have pretty great A/D converters for sensing.
throwawaymaths 13 hours ago there are several free asic shuttle runs available for hobbyists iirc
jrexilius 15 hours ago Amazing work! Is the primary goal here to allow more production use of python in an embedded context, rather than just prototyping?

hwpythonner 15 hours ago Thank you! And yes, exactly.
actinium226 11 hours ago So first of all, this is awesome and props to you for some great work.

I have what may be a dumb question, but I've heard that Lua can be used in embedded contexts, and that it can be used without dynamic memory allocation and other such things you don't want in real time systems. How does this project compare to that? And like I said it's likely a dumb question because I haven't actually used Lua in an embedded context but I imagine if there's something there you've probably looked at it?
woodrowbarlow 10 hours ago with embedded scripting languages (including lua and micropython) the CPU is running a compiled interpreter (usually written in C, compiled to the CPU's native architecture) and the interpreter is running the script. on PyXL, the CPU's native architecture is python bytecode, so there's no compiled interpreter.
pjmlp 14 hours ago This is kind of cool, basically a Python Machine. :)

boutell 13 hours ago I see what you did there! There's a LISP Machine with its guts on display at the MIT Museum. I recall we had one in the graduate student comp sci lab at University of Delaware (I was a tolerated undergrad). By then LISP was faster on a Sun workstation, but someone had taught it to play Tetris.
echoangle 11 hours ago Would this be able to handle an exec()- or eval()-call? Is there a Python byte code compiler available as python byte code to include in this processor?

IshKebab 11 hours ago Yeah this is surely a subset of Python.