Self-hosted x86 back end is now default in debug mode (ziglang.org)
197 points by brson 11 hours ago | 99 comments
Retro_Dev 10 hours ago [-]
As far as I know, Zig has a bunch of things in the works for a better development experience. Almost every day there's something being worked on - like https://github.com/ziglang/zig/pull/24124 just now. I know that Zig had some plans in the past to also work on hot code swapping. At this rate of development, I wouldn't be surprised if hot code swapping were functional within a year on x86_64.

The biggest pain point I personally have with Zig right now is the speed of `comptime`. The compiler has a lot of work to do here, and running a brainF** DSL at compile time is pretty slow (speaking from experience - it was a really funny experiment). Will we have improvements to this part of the compiler any time soon?

Overall I'm really hyped for these new backends that Zig is introducing. Can't wait to make my own URCL (https://github.com/ModPunchtree/URCL) backend for Zig. ;)

AndyKelley 8 hours ago [-]
For comptime perf improvements, I know what needs to be done - I even started working on a branch a long time ago. Unfortunately, it is going to require reworking a lot of the semantic analysis code. Something that absolutely can, should, and will be done, but is competing with other priorities.
titzer 2 hours ago [-]
For Virgil I went through three different compile-time interpreters. The first walked a tree-like IR that predated SSA. Then, after SSA, I designed a linked-list-like representation specifically for interpretation speed. After dozens of little discrepancies between this custom interpreter and compiled output, I finally got rid of it and wrote an interpreter that works directly on the SSA intermediate representation. In the worst case, the SSA interpreter is only 2X slower than the custom interpreter. In the best case, it's faster, and saves a translation step. I feel it is worth it, given the maintenance burden and bugs of keeping a separate representation.
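The "interpret the SSA directly" idea can be sketched generically - this is not Virgil's code, just a toy three-instruction IR in Zig, where each instruction's result slot is its own index, so one forward pass evaluates the whole program:

```zig
const std = @import("std");

// A toy SSA-style IR: an instruction's result lives at its index in
// the value array, and operands refer to earlier instruction indices.
const Inst = union(enum) {
    konst: i64,
    add: [2]u32,
    mul: [2]u32,
};

// Single forward pass: every operand is already computed when we reach it.
fn eval(insts: []const Inst, vals: []i64) i64 {
    for (insts, 0..) |inst, i| {
        vals[i] = switch (inst) {
            .konst => |c| c,
            .add => |ops| vals[ops[0]] + vals[ops[1]],
            .mul => |ops| vals[ops[0]] * vals[ops[1]],
        };
    }
    return vals[insts.len - 1];
}

pub fn main() void {
    // (2 + 3) * 4
    const prog = [_]Inst{
        .{ .konst = 2 },
        .{ .konst = 3 },
        .{ .add = .{ 0, 1 } },
        .{ .konst = 4 },
        .{ .mul = .{ 2, 3 } },
    };
    var vals: [prog.len]i64 = undefined;
    std.debug.print("{d}\n", .{eval(&prog, &vals)}); // 20
}
```

The appeal is exactly what the comment describes: the interpreter and the compiler consume the same IR, so they can't drift apart.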
lenkite 5 hours ago [-]
Thank you for working so hard on Zig. Really looking forward to Zig 1.0 taking the system programming language throne.
Imustaskforhelp 1 hours ago [-]
I am not sure why C, Rust, and Zig - along with others like Ada and Odin, and of course C++ (how did I forget it?) - can't just coexist.

Not sure why, but I was definitely getting some Game of Thrones vibes from your comment. I would love to see some competition, but I don't know - just code in whatever systems programming language is productive for you, I guess.

But I don't know low-level languages, so please take my words as my two cents.

lmm 2 minutes ago [-]
The industry has the resources to sustain maybe two and a half proper IDEs with debuggers, profilers, etc. Much as we might wish otherwise, language popularity matters. The likes of LSP mitigate this to a certain extent, but at the moment they only go so far.
brabel 1 hours ago [-]
I am just watching the Game of Thrones series right now, so this comment sounds funnier than it should to me :D.

The fight for the Iron Throne, lots of self-proclaimed kings trying to take it... C is like King Joffrey, Rust is maybe Robb Stark?! And Zig... probably princess Daenerys with her dragons.

9d 8 hours ago [-]
Have you considered hiring people to help you with these tasks so you can work in parallel and get more done quicker?
AndyKelley 7 hours ago [-]
It's a funny question because, as far as I'm aware, Zig Software Foundation is the only organization among its peers that spends the bulk of its revenue directly paying contributors for their time - something I'm quite proud of.
9d 7 hours ago [-]
Oh, so you're already doing that. Well then that's fine; the tasks will get done when they get done.
sali0 3 hours ago [-]
URCL is sending me down a rabbithole. Haven't looked super deeply yet, but the most hilarious timeline would be that an IR built for Minecraft becomes a viable compilation target for languages.
bgthompson 9 hours ago [-]
Hot code swapping will be huge for gamedev. The idea that Zig will basically support it by default with a compiler flag is wild. Try doing that, clang.
modernerd 31 minutes ago [-]
I ended up switching from Zig to C# for a tiny game project because C# already supports cross-platform hot reload by default. (It’s just `dotnet watch`.) Coupled with cross-compilation, AOT compilation and pretty good C interop, C# has been great so far.
pjmlp 1 hours ago [-]
Visual C++ and tools like Live++ have been doing it for years.

Maybe people should occasionally move away from their UNIX and vi ways.

Retro_Dev 9 hours ago [-]
Totally agree with that - although even right now Zig is excellent for gamedev, considering it's performant, uses LLVM (in release modes), compiles REALLY FAST (in debug mode), has near-seamless C integration, and the language itself is really pleasant to use (my opinion).
sgt 2 hours ago [-]
Is Zig actually being used for real game dev already?
AndyKelley 2 hours ago [-]
here's one: https://store.steampowered.com/app/2156410/Konkan_Coast_Pira...
baq 2 hours ago [-]
Why more games aren’t being developed in Lisp is… perhaps not beyond me, but game development missed a turn a couple of times.
pjmlp 1 hours ago [-]
That is basically what they do when using Lua, Python, C#, or Java, but with fewer parentheses - which apparently are too scary for some folks, moving from print(x) to (print x).

There was a famous game with Lisp scripting, Abuse, and Naughty Dog used to have Game Oriented Assembly Lisp.

baq 1 hours ago [-]
I had exactly the same title in mind - I remember my very young self being in shock when I learned that it was Lisp. If you didn't look under the hood you'd never be able to tell; it just worked.
dnautics 4 hours ago [-]
Is it easy to build out a custom backend? I haven't looked at it yet but I'd like to try some experiments with that -- to be specific, I think that I can build out a backend that will consume AIR and produce a memory safety report. (it would identify if you're using undefined values, stack pointer escape, use after free, double free, alias xor mut)
ww520 7 hours ago [-]
Is comptime slowness really an issue? I'm building a JSON-RPC library and heavily relying on comptime to dispatch a JSON request to an arbitrary function. Due to strict static typing, there's no way to dynamically dispatch to a function with arbitrary parameters at runtime. The only way I found was figuring out the function type mapping at compile time using comptime. I'm sure it will blow up the code size with an additional copy of the comptime-generated code for each target function.
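A minimal sketch of that kind of comptime dispatch - a generic illustration, not the actual library; the handler functions and the i64-only calling convention are assumptions for brevity:

```zig
const std = @import("std");

// Hypothetical handlers a JSON-RPC server might route to.
fn add(a: i64, b: i64) i64 {
    return a + b;
}
fn neg(a: i64) i64 {
    return -a;
}

// comptime builds an argument tuple matching f's signature, so a
// runtime slice of decoded values can be applied to any function.
// Note: a separate copy of `call` is instantiated per target function,
// which is the code-size blowup the comment mentions.
fn call(comptime f: anytype, args: []const i64) i64 {
    var tuple: std.meta.ArgsTuple(@TypeOf(f)) = undefined;
    inline for (0..tuple.len) |i| tuple[i] = args[i];
    return @call(.auto, f, tuple);
}

pub fn main() void {
    std.debug.print("{d}\n", .{call(add, &.{ 2, 3 })}); // 5
    std.debug.print("{d}\n", .{call(neg, &.{7})}); // -7
}
```

A real dispatcher would map method names to these instantiations in a comptime-built table, but the type-mapping trick is the same.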
Okx 6 hours ago [-]
Yes, last time I checked, Zig's comptime was 20x slower than interpreted Python. Parsing a non-trivial JSON file at comptime is excruciatingly slow and can take minutes.
squeek502 3 hours ago [-]
Relevant: https://github.com/ziglang/zig/issues/4055#issuecomment-1646...
fransje26 49 minutes ago [-]
> Parsing a non-trivial JSON file at comptime is excruciatingly slow

Nevertheless, impressive that you can do so!

whinvik 27 minutes ago [-]
As a complete noob what is the advantage of Zig over other languages? I believe it's a more modern C but what is the modern part?
flohofwoe 16 minutes ago [-]
Let me google that for you:

https://ziglang.org/documentation/0.14.1/

bgthompson 10 hours ago [-]
This is already such a huge achievement, yet as the devlog notes, there is plenty more to come! The idea of a compiler modifying only the parts of a binary that it needs to during compilation is simultaneously refreshing and totally wild, yet now squarely within reach of the Zig project. Exciting times ahead.
9d 8 hours ago [-]
> For a larger project like the Zig compiler itself, it takes the time down from 75 seconds to 20 seconds. We’re only just getting started.

Excited to see what he can do with this. He seems like a really smart guy.

What's the package management look like? I tried to get an app with QuickJS + SDL3 working, but the mess of C++ pushed me to Rust where it all just works. Would be glad to try it out in Zig too.

stratts 7 hours ago [-]
Package management in Zig is more manual than in Rust: you fetch the package URL using the CLI, then import the module in your build script. This has its upsides - you can depend on arbitrary archives, so lots of Zig packages of C libraries are just a build script with a dependency on an unmodified tarball release. But obviously it's a little trickier for beginners.

SDL3 has both a native Zig wrapper: https://github.com/Gota7/zig-sdl3

And a more basic repackaging of the C library/API: https://github.com/castholm/SDL

For QuickJS, the only option is the C API: https://github.com/allyourcodebase/quickjs-ng

Zig makes it really easy to use C packages directly like this, though Zig's types are much stricter, so you'll inevitably be doing a lot of casting when interacting with the API.
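For concreteness, the "depend on an unmodified tarball" workflow looks roughly like this in `build.zig.zon` - the package name, URL, and hash below are placeholders, not a real package, and field spellings vary slightly between Zig versions; `zig fetch --save <url>` writes the real entry for you:

```zig
// build.zig.zon (sketch, not a real package)
.{
    .name = "myapp",
    .version = "0.1.0",
    .dependencies = .{
        .quickjs = .{
            // Any archive URL works; it doesn't need to be a "Zig package".
            .url = "https://example.com/quickjs-ng.tar.gz",
            // zig fetch prints the content hash to pin here.
            .hash = "1220aaaa...",
        },
    },
    .paths = .{""},
}
```

In `build.zig` you then call `b.dependency("quickjs", .{})` to get at the fetched tree.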

LAC-Tech 5 hours ago [-]
It's also worth pointing out that the Zig standard library covers a lot more than the Rust one. No need for things like rustix, rand, hashbrown, and a few others I always have to add whenever I do Rust stuff.
nindalf 4 hours ago [-]
You add hashbrown as an explicit dependency? The standard library HashMap is a re-export of hashbrown. Doesn’t it work for you?
vlovich123 4 hours ago [-]
Can’t speak for the OP, but there are a number of high-performance interfaces that avoid redundant computation which are only available directly from hashbrown.
LAC-Tech 4 hours ago [-]
Huh, does it? I always add it so I can make non-deterministic hashmaps in Rust. Oh, and you need one more crate for the hashing function, I think.

But I did not know hashmap re-exported hashbrown, thanks.

nindalf 2 hours ago [-]
Yep, they’re the same since Rust 1.36 (Jul 2019) - https://blog.rust-lang.org/2019/07/04/Rust-1.36.0/
LAC-Tech 2 hours ago [-]
https://doc.rust-lang.org/std/?search=hashbrown

Looks like there's no way to access it outside of HashMap.

Though maybe you just need the third party hasher and you can call with_hasher.

IDK man there's a lot going on with rust.

WalterBright 3 hours ago [-]
The dmd D compiler can compile itself (debug build):

    real    0m18.444s
    user    0m17.408s
    sys     0m1.688s

On an ancient processor (it runs so fast I just never upgraded it):

    $ cat /proc/cpuinfo
    processor       : 0
    vendor_id       : AuthenticAMD
    cpu family      : 15
    model           : 107
    model name      : AMD Athlon(tm) 64 X2 Dual Core Processor 4400+
    stepping        : 2
    cpu MHz         : 2299.674
    cache size      : 512 KB
    physical id     : 0
    siblings        : 2
    core id         : 0
    cpu cores       : 2
    apicid          : 0
    initial apicid  : 0
    fpu             : yes

AndyKelley 3 hours ago [-]
18s eh? we're looking at 15s in https://github.com/ziglang/zig/pull/24124

oh, and by the way that includes the package manager, so the compile time accounts for:

* HTTP

* TLS (including aegis-128l, aegis-256, aes128-gcm, aes256-gcm, chacha20poly1305)

* deflate, zstd, and xz

* git protocol

WalterBright 2 hours ago [-]
Nice to hear from you, Andrew! I assume you're using a machine newer than 15 years ago :-)

I suppose it would compile faster if I didn't have symbolic debug info turned on.

Anyhow, our users often use dmd for development because of the high speed turnaround, and gdc/ldc for deployment with their more optimized code gen.

AndyKelley 2 hours ago [-]
You too! Yeah I think that was a great call. I took inspiration from D for sure when aiming for this milestone that we reached today.

Some people say you should use an old computer for development to help you write faster code. I say you should use a new computer for development, and write the fastest code you possibly can by exploiting all the new CPU instructions and optimizing for newer caching characteristics.

WalterBright 2 hours ago [-]
I'm still in the camp of using computers our users tend to have.

Also, self-compile times are strongly related to how much code there is in the compiler, not just the compile speed.

I also confess to being a bit jaded on this. I've been generating code from 8086 processors to the latest. Which instructions and combinations are faster is always flip-flopping around from chip to chip. So I leave it to the gdc/ldc compilers for the top shelf speed, and just try to make the code gen bulletproof and do a solid job.

Working on the new AArch64 backend has been quite fun. I'll be doing a presentation on it later in the summer. My target machine is a Raspberry Pi, which is a great machine.

Having the two code generators side by side also significantly increased the build times, because it's a lot more code being compiled.

AndyKelley 2 hours ago [-]
Fair enough, and yeah, I hear you on the compilation cost of all the targets. We don't have aarch64 yet, but in addition to x86_64 we do have an LLVM backend, a C backend, a SPIR-V backend, a WebAssembly backend, a RISC-V backend, and a SPARC backend. All that plus the stuff I mentioned earlier in 15s on a modern laptop.
candrewlee 2 hours ago [-]
15s is fast, wow.

Do you have any metrics on which parts of the whole compiler, std, package manager, etc. take the longest to compile? How much does comptime slowness affect the total build time?

throwawaymaths 4 hours ago [-]
I'm stunned that Zig can compile itself in 75 seconds (even with LLVM)
pjmlp 1 hours ago [-]
We used to have such fast compile times with Turbo Pascal and other Pascal dialects, Modula-2, and Oberon dialects, across 16-bit and early 32-bit home computers.

Then everything went south, with the languages that took over mainstream computing.

faresahmed 3 minutes ago [-]
Not to disagree with you, but even C++ is going through great efforts to improve compile-times through C++20 modules and C++23 standard library modules (import std;). Although no compiler fully supports both, you can get an idea of how they can improve compile-times with clang and libc++

    $ # No modules
    $ clang++ -std=c++23 -stdlib=libc++ a.cpp # 4.8s
    $ # With modules
    $ clang++ -std=c++23 -stdlib=libc++ --precompile -o std.pcm /path/to/libc++/v1/std.cppm # 4.6s but this is done once
    $ clang++ -std=c++23 -stdlib=libc++ -fmodule-file=std=std.pcm b.cpp # 1.5s 
a.cpp and b.cpp are equivalent, but b.cpp does `import std;` while a.cpp #includes every standard C++ header file (the same thing as import std; you can find them in libc++'s std.cppm).

Notice that this is an extreme example, since we're importing the whole standard library, which is actually discouraged [^1]. Instead you can get through the day with just these flags: `-stdlib=libc++ -fimplicit-modules -fimplicit-module-maps` (and of course -std=c++20 or later) - no extra files/commands required! But you are restricted to doing import <vector>; and such - no import std.

[^1]: Non-standard headers like `bits/stdc++.h`, which do the same thing (#including the whole standard library), are what's actually discouraged, because (a) they're non-standard and (b) compile times. But I can see `import std` solving both of those and being encouraged once it's widely available!

treeshateorcs 10 hours ago [-]
So, a hello-world program (`zig init`) is 9.3MB compiled. Compared to 7.6KB with `-Doptimize=ReleaseSmall`, that is huge (more than 1000 times larger).
AndyKelley 10 hours ago [-]
Indeed, good observation. Another observation is that 82% of that is debug info.

-OReleaseSmall -fno-strip produces a 580K executable, while -ODebug -fstrip produces a 1.4M executable.

zig's x86 backend makes for a significantly better debugging experience with this zig-aware lldb fork: https://github.com/ziglang/zig/wiki/LLDB-for-Zig

I don't recall whether it supports stepping through comptime logic at the moment; that was something we discussed recently.

treeshateorcs 3 hours ago [-]
is it naive to expect the new backend to eventually produce -OReleaseSmall binaries as small as LLVM's?
squeek502 3 hours ago [-]
As far as I'm aware, using the self-hosted backend for anything other than Debug mode is a goal, but a far-future goal.

I believe the most relevant links are https://github.com/ziglang/zig/issues/16270 and https://github.com/orgs/ziglang/projects/2/views/1?pane=issu... (as you can see, nothing is concrete yet, just vague mentions of optimization passes)

9d 8 hours ago [-]
[flagged]
mirekrusin 6 hours ago [-]
Sounds like Julia should consider switching to Zig to get considerable performance gains. I remember the authors feeling uneasy with each LLVM release, worrying about performance regressions.
patagurbon 4 hours ago [-]
Julia is effectively hard-locked to LLVM. Large swathes of the ecosystem rely on the presence of LLVM, either for intrinsics, autodiff (Enzyme), or GPU compilation - never mind Base and Core.

The compiler is fairly retargetable, this is an active area of work. So it’s maybe possible in the future to envision zig as an alternative compiler for fragments of the language.

bobbylarrybobby 5 hours ago [-]
Isn't LLVM considered part of Julia’s public API? You've got macros like @code_llvm that actually give you IR
jakobnissen 3 hours ago [-]
That could be a way to get compile times down, but I think there is still much to do on the Julia side.

Such as a more fine-grained compile cache, better tooling to prevent invalidations, removal of the world-splitting optimisation, more use of multithreading in the compiler, automatic precompilation of concrete signatures, and generation of lazier code that hot-swaps in code when it is compiled.

candrewlee 3 hours ago [-]
This is awesome for Zig, I think this direction is gonna be a primary differentiator when comparing to Rust.

And hey, I wrote a lot of the rendering code for that perf analyzer. Always fun to see your work show up on the internet.

https://github.com/andrewrk/poop

foresto 5 hours ago [-]
Isn't this one of the preconditions for bringing async/await back to Zig?

https://github.com/ziglang/zig/wiki/FAQ#what-is-the-status-o...

AndyKelley 4 hours ago [-]
I've got that stuff all figured out, should have some interesting updates for everyone over the next 2-3 months. Been redoing I/O from the ground up - mostly standard library work.
sgt 2 hours ago [-]
Is it really that important though, to have async/await? I mean, do Zig developers actually need it?
AndyKelley 2 hours ago [-]
Funny, this is the central question of the talk that I am working on for Systems Distributed in Amsterdam next week.
sgt 2 hours ago [-]
Hoping they make the talks available afterwards on YouTube or somewhere.
nasretdinov 32 minutes ago [-]
IMO there are plenty of cases where you don't need to squeeze every little drop of performance by going all in with epoll/io_uring directly, but you still want to handle 10k+ concurrent connections more effectively than with threads
geodel 4 hours ago [-]
Reading the link, it seems to me async is never coming back, or at least not until 2028.
fjnfndnf 4 hours ago [-]
That's just the backend swapped out - all the analysis and type-checking passes are still present - or is it also reducing verification?

While a quick compile cycle is beneficial for productivity, this is only the case if it also includes fast tests.

Thus, wouldn't it be easier to just interpret Zig for debug builds? That would also solve the issue of having to repeat the work for each target.

flohofwoe 18 minutes ago [-]
> Thus wouldn't it be easier to just interpret zig for debug

The whole point of debug mode is debuggability, and hooking up such an interpreted Zig to a standard debugger like gdb or lldb probably isn't trivial, since those expect an executable with DWARF debug info.

ArtixFox 25 minutes ago [-]
Only the backend has been swapped out. The tests will be fast too, yes.

There is no real need to add an interpreter. Having custom backends means that while they are currently used for debug builds, far in the future they might be able to compete with LLVM for speed.

Adding an interpreter would be useless, as you would still need to write a custom backend.

The problem is LLVM's slowness for both debug and release.

WhereIsTheTruth 3 hours ago [-]
I said it for D and Nature, and for every other language that comes with its own backend: we all have a duty to support projects that try not to depend on LLVM. Compiler R&D has stagnated because of LLVM - far too many languages chose to depend on it, and far too many people don't value fast iteration time. Or perhaps they grew to not expect any better?

Fast iteration time with incremental compilation and binary patching, plus good debugging, should be the expectation for new languages, not something niche or "too hard to do".

flohofwoe 8 minutes ago [-]
OTOH, LLVM caused an explosion in languages created by individuals, which immediately had competitive performance and a wide range of supported platforms (Zig being one of them!).

The entire realtime rendering industry is essentially built on top of LLVM (or forks of LLVM), even Microsoft has switched its shader compiler to LLVM and is now starting to upstream their code.

The compiler infrastructure of most game consoles is Clang based (except Xbox which - so far - sticks to MSVC).

So all in all, LLVM has been a massive success, especially for bootstrapping new things.

pjmlp 1 hours ago [-]
Indeed, that is one of the few things I find positive on Go, being bootstraped, and not dependent on LLVM.
9d 8 hours ago [-]
> And we’re looking at aarch64 next - work that is expected to be accelerated thanks to our new Legalize pass.

Sorry, what?

garbagepatch 5 hours ago [-]
It seems to be Zig's equivalent to this part of LLVM: https://llvm.org/docs/GlobalISel/Legalizer.html
nektro 4 hours ago [-]
afaict it's a new pass that transforms Air generated from Sema into Air understood by a particular backend, since they're not all at the same level of maturity
WalterBright 3 hours ago [-]
I'm about halfway done writing an AArch64 backend for the dmd D compiler. Of course, the gdc and ldc compilers already support that.
BrouteMinou 9 hours ago [-]
[flagged]
9d 8 hours ago [-]
All languages are safe if you use them correctly, and unsafe if you don't.

But Rust is notorious for catching memory errors that could easily be exploited. Does Zig do this too now, or did I miss something?

ArtixFox 5 hours ago [-]
It doesn't catch temporal memory errors, but what it offers currently is still better than most of its competitors [defer, explicit allocators, custom allocators with integrated support for Valgrind and friends, etc.].

Due to how easy to parse and work with the language is, we might see a boom of static analyzers for zig. There are some quite interesting demos that can even go beyond basic borrow checking and straight into refinement types.

Zig's integrated build system will make it easy to add these tools into any project. Or maybe the zig compiler itself will integrate it.

The future is quite hopeful for Zig, but it is probably not one that is restricted to just borrow checking. I personally think it can go beyond that and slowly become what C+Frama-C or Ada is now for the critical-systems world.
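For readers unfamiliar with the features mentioned above, a tiny sketch of defer plus an explicit allocator, using only standard-library API (GeneralPurposeAllocator reports leaks at deinit time in debug builds):

```zig
const std = @import("std");

pub fn main() !void {
    // An explicit allocator: nothing in Zig allocates unless you pass one in.
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    defer _ = gpa.deinit(); // returns .leak if anything was leaked

    const allocator = gpa.allocator();

    const buf = try allocator.alloc(u8, 64);
    defer allocator.free(buf); // defer pairs cleanup with acquisition
    @memset(buf, 0);
}
```

Deleting the `allocator.free` line makes the leak visible at `gpa.deinit()`, which is the kind of explicitness the comment is pointing at.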

SkiFire13 3 hours ago [-]
> Due to how easy to parse and work with the language is, we might see a boom of static analyzers for zig.

Parsing is almost irrelevant for static analysis. The most important thing is how restricted the semantics are, which allows you to make assumptions and restrict the behaviour of the program to a small enough set that can be understood. From what I can see, Zig is not much different from C on this front, and is potentially even harder to analyze due to `comptime`.

ArtixFox 3 hours ago [-]
My bad for using the word "parse" - "working with the language" better explains my intent. The language is minimal [unfortunately it follows LLVM's semantics, but that's planned to change].

Comptime cannot affect runtime, cannot do syscalls, and cannot go on forever. Due to how limited it is, we can simply let it evaluate and then work with the code that is left.
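The "cannot go on forever" part is enforced by an evaluation quota that you must raise explicitly; a small sketch:

```zig
const std = @import("std");

// Naive comptime recursion; without enough branch quota the compiler
// rejects the evaluation instead of looping forever.
fn fib(comptime n: u64) u64 {
    return if (n < 2) n else fib(n - 1) + fib(n - 2);
}

const value = comptime blk: {
    @setEvalBranchQuota(1_000_000); // explicit opt-in to more evaluation
    break :blk fib(20);
};

pub fn main() void {
    std.debug.print("{d}\n", .{value}); // 6765, computed entirely at comptime
}
```

Without the `@setEvalBranchQuota` call, the default quota (1000 backward branches) would make this a compile error rather than a hang - which is exactly the property that makes comptime friendly to analysis.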

It is still a better method than either writing text macros [which are also fine, but a pain to use] or using proc macros, which can do syscalls and open up a whole new can of worms.

In languages that don't have any metaprogramming support, you must rely on something like a code generator, which is most probably not provided by the compiler vendor, so verifying and trusting it becomes the responsibility of the user.

Or you are left with things like template metaprogramming, where the more complex techniques are not supported by the standard or by any verification tools, and you must trust the compiler to make the right decisions about the code, with no tool support for its verification.

Out of all the options for metaprogramming for a language that must also be verification friendly, comptime is probably one of the best solutions, if not the best.

Zig not being much different from C in this aspect is quite an unexpected compliment, because of the plethora of verification work done for C. Those same tools can be adapted for Zig, and you would not have to worry about much beyond logic and memory-allocation verification [there exists a prototype for both].

VWWHFSfQ 7 hours ago [-]
I'm interested in Zig but kind of discouraged by the 30 pages of open issues mentioning "segfault" on their Github tracker. It's disheartening for a systems programming language being developed in the 21st century.
cornholio 2 hours ago [-]
Zig is not a memory-safe language and does not attempt to prevent its users from shooting themselves in the foot; it tries to make those unsafe actions explicit and simple, unlike something like C++ that drowns you in complexity. But if you really want to do pointer wrangling and use memory after freeing it, Zig allows you to do it.

This design philosophy should lead to countless segfaults that are the result of Zig working as designed. It also relegates Zig to the small niche of projects in modern programming where performance and developer productivity are more important than resilience and correctness.

enbugger 4 hours ago [-]
Since when are segfaults a thing of the 20th century?
pjmlp 1 hours ago [-]
Since we discovered better ways of doing systems programming around the early 1980s that aren't tied to UNIX culture.
AndyKelley 7 hours ago [-]
I see 40 pages in rust-lang/rust. Are you sure this heuristic is measuring what you think it's measuring?
VWWHFSfQ 7 hours ago [-]
Oh I wasn't comparing to Rust. But just a quick glance between the two repos shows a pretty big difference between the nature of the "segfault" issues reported.

yikes... https://github.com/ziglang/zig/issues/23556

steveklabnik 7 hours ago [-]
Every mature compiler (heck, project of any kind) has thousands of bugs open. It’s just a poor metric.
9d 7 hours ago [-]
That's about size and popularity, not maturity.

Several very popular, small, mature projects have zero or few open issues.

(And several mature, huge and unpopular ones too.)

VWWHFSfQ 7 hours ago [-]
Yep and like I said, I'm interested in Zig. But it's still somewhat discouraging as a C replacement just because it seems to still have all the same problems but without the decades of tools and static analyzers to help out. But I'm keeping an eye on it.
Retro_Dev 3 hours ago [-]
It is my opinion that even if Zig were nothing more than a syntactical tweak of C, it would be preferable over C. C has a lot of legacy cruft that can't go away, and decades of software built with poor practices and habits. The status-quo in Zig is evolving to help mitigate these issues. One obvious example that sets Zig apart from C is error handling built into the language itself.
uecker 3 hours ago [-]
What specific legacy cruft bothers you? I think it is a strength of C that it evolves slowly and that code from decades ago will still run.

I also do not see how having decades of legacy software is holding anybody back doing new stuff in C in a better way. New C code can be very nice.

pjmlp 59 minutes ago [-]
So slowly that what allowed the Morris worm is still present in C23, and now everyone is rushing into hardware memory tagging as a solution instead.
pjmlp 1 hours ago [-]
Maybe people should take advantage of the fact that D, Ada, and Modula-2 are all part of GCC.
ArtixFox 6 hours ago [-]
I'm pretty sure Valgrind and friends can be used with Zig.

Zig is still not 1.0 and there aren't many stability guarantees, so making something like Frama-C, even though it is possible, is simply going to be a lot of pain due to constant breakages compared to something like C.

But it is not impossible and there have been demos of refinement type checkers https://github.com/ityonemo/clr

Beyond that, tools like Antithesis https://antithesis.com/ exist that can be used for finding bugs. [I don't have any experience with it.]

stratts 6 hours ago [-]
What's the state of the art here?

Most of Zig's safety, or lack thereof, seems inherent to allowing manual memory management, and at least comparable to its "C replacement" peers (Odin, C3, etc).

pjmlp 57 minutes ago [-]
Comparable to Modula-2 and Object Pascal, hence why those languages ought to do better.

Otherwise it is like getting Modula-2 from 1978, but with C-like syntax, because curly brackets are a must.

ArtixFox 6 hours ago [-]
I guess formal verification tools? That is the peak that even Rust is trying to reach with Creusot and friends. Ada has support for it via the SPARK subset [which can use Why3, or have you write the proofs in Coq]. Frama-C exists for C. Astrée exists for C++, but I don't think lone developers can access it. It is used at Boeing, though.
garbagepatch 5 hours ago [-]
No shame in waiting until 1.0. There's other production ready languages you can use right now so you can ignore Zig until then.
xmorse 3 hours ago [-]
That issue is ridiculous - what did you expect from randomly incrementing a pointer into an array?
xmorse 3 hours ago [-]
This could be the greatest programming language development of the last 10 years. Finally a language that compiles fast and is fast at runtime too.
0points 15 minutes ago [-]
> Finally a language that compiles fast and is fast at runtime too.

We've been enjoying golang for the last decade and then some ;-)

pjmlp 55 minutes ago [-]
Turbo Pascal 5.5 for MS-DOS, one example out of many others from those days, running on lame IBM PCs.

If anything, it is a generation rediscovering what we have lost.

ArtixFox 3 hours ago [-]
theres a long way to go but its better than nothing!