Software update: Ruby 3.3.0
Ruby is a programming language for quick and easy object-oriented programming. It was conceived on 24 February 1993 by Yukihiro 'Matz' Matsumoto and released in 1995. The name Ruby refers to the gemstone, a play on words referencing Perl (pearl). The author says he created Ruby to follow 'the principle of least surprise', by which he means that the language should be free of the traps and pitfalls that plague other languages. Version 3.3.0 was released a few days ago, with the following changes:
Ruby 3.3.0 Released
We are pleased to announce the release of Ruby 3.3.0. Ruby 3.3 adds a new parser named Prism, uses Lrama as a parser generator, adds a new pure-Ruby JIT compiler named RJIT, and brings many performance improvements, especially to YJIT.
Prism
- Introduced the Prism parser as a default gem.
  - Prism is a portable, error-tolerant, and maintainable recursive descent parser for the Ruby language.
- Prism is production-ready and actively maintained; you can use it in place of Ripper.
- There is extensive documentation on how to use Prism.
- Prism is both a C library that will be used internally by CRuby and a Ruby gem that can be used by any tooling which needs to parse Ruby code.
- Notable methods in the Prism API are:
  - Prism.parse(source), which returns the AST as part of a parse result object.
  - Prism.parse_comments(source), which returns the comments.
  - Prism.parse_success?(source), which returns true if there are no errors.
- You can make pull requests or file issues directly on the Prism repository if you are interested in contributing.
- You can now use ruby --parser=prism or RUBYOPT="--parser=prism" to experiment with the Prism compiler. Please note that this flag is for debugging only.
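The three Prism API methods listed above can be tried directly; a minimal sketch, assuming Ruby 3.3+ where Prism ships as a default gem:

```ruby
require "prism" # a default gem in Ruby 3.3+

source = <<~'RUBY'
  # greet a user
  def greet(name)
    "Hello, #{name}!"
  end
RUBY

result = Prism.parse(source)               # returns a parse result object
puts result.value.class                    # the root node of the AST
puts Prism.parse_comments(source).length   # number of comments found
puts Prism.parse_success?(source)          # true: the source has no errors
puts Prism.parse_success?("def broken(")   # false: unterminated definition
```

Because Prism is error-tolerant, parsing broken code still yields a result object with error information rather than raising, which is what makes it usable for editors and other tooling.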
Use Lrama instead of Bison
- Replaced Bison with the Lrama LALR parser generator. [Feature #19637]
- If you are interested, please see "The future vision of Ruby Parser".
- Lrama's internal parser is replaced with an LR parser generated by Racc for maintainability.
- Parameterizing rules (?, *, +) are supported and will be used in Ruby's parse.y.
YJIT
- Major performance improvements over Ruby 3.2
  - Support for splat and rest arguments has been improved.
  - Registers are allocated for stack operations of the virtual machine.
  - More calls with optional arguments are compiled. Exception handlers are also compiled.
  - Unsupported call types and megamorphic call sites no longer exit to the interpreter.
  - Basic methods like Rails #blank? and specialized #present? are inlined.
  - Integer#*, Integer#!=, String#!=, String#getbyte, Kernel#block_given?, Kernel#is_a?, Kernel#instance_of?, and Module#=== are specially optimized.
  - Compilation speed is now slightly faster than Ruby 3.2.
  - Now more than 3x faster than the interpreter on Optcarrot!
- Significantly improved memory usage over Ruby 3.2
  - Metadata for compiled code uses a lot less memory.
  - --yjit-call-threshold is automatically raised from 30 to 120 when the application has more than 40,000 ISEQs.
  - --yjit-cold-threshold is added to skip compiling cold ISEQs.
  - More compact code is generated on Arm64.
- Code GC is now disabled by default
  - --yjit-exec-mem-size is treated as a hard limit at which compilation of new code stops.
  - No sudden drops in performance due to code GC, and better copy-on-write behavior on servers reforking with Pitchfork.
  - You can still enable code GC if desired with --yjit-code-gc.
- Added RubyVM::YJIT.enable, which can enable YJIT at run-time
  - You can start YJIT without modifying command-line arguments or environment variables. Rails 7.2 will enable YJIT by default using this method.
  - This can also be used to enable YJIT only once your application is done booting. --yjit-disable can be used if you want to use other YJIT options while disabling YJIT at boot.
- More YJIT stats are available by default
  - yjit_alloc_size and several more metadata-related stats are now available by default.
  - The ratio_in_yjit stat produced by --yjit-stats is now available in release builds; a special stats or dev build is no longer required to access most stats.
- Added more profiling capabilities
  - --yjit-perf is added to facilitate profiling with Linux perf.
  - --yjit-trace-exits now supports sampling with --yjit-trace-exits-sample-rate=N.
- More thorough testing and multiple bug fixes.
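The run-time toggle described above can be exercised as follows; a minimal sketch, guarded so it also runs on Ruby builds and versions without this API:

```ruby
# Enable YJIT at run-time instead of via --yjit or RUBY_YJIT_ENABLE=1.
# The guard makes the script a no-op on builds that lack YJIT or on
# Rubies older than 3.3, where RubyVM::YJIT.enable does not exist.
if defined?(RubyVM::YJIT) && RubyVM::YJIT.respond_to?(:enable)
  RubyVM::YJIT.enable unless RubyVM::YJIT.enabled?
  puts "YJIT enabled: #{RubyVM::YJIT.enabled?}"
else
  puts "Run-time YJIT toggling is not available on this build"
end
```

Calling this after application boot keeps boot-time code out of the JIT, which is the pattern Rails 7.2 is expected to use.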
RJIT
- Introduced a pure-Ruby JIT compiler, RJIT, which replaces MJIT.
  - RJIT supports only the x86-64 architecture on Unix platforms.
  - Unlike MJIT, it doesn't require a C compiler at runtime.
- RJIT exists only for experimental purposes.
  - You should keep using YJIT in production.
- If you are interested in developing JIT for Ruby, please check out k0kubun's presentation on Day 3 of RubyKaigi.
M:N thread scheduler
- An M:N thread scheduler was introduced. [Feature #19842]
  - M Ruby threads are managed by N native threads (OS threads), so thread creation and management costs are reduced.
  - Because it can break C-extension compatibility, the M:N thread scheduler is disabled on the main Ractor by default.
    - The RUBY_MN_THREADS=1 environment variable enables M:N threads on the main Ractor.
    - M:N threads are always enabled on non-main Ractors.
  - The RUBY_MAX_CPU=n environment variable sets the maximum number of native threads (N). The default value is 8.
    - Since only one Ruby thread per Ractor can run at the same time, the number of native threads used is the smaller of RUBY_MAX_CPU and the number of running Ractors. As a result, single-Ractor applications (which is most applications) will use only one native thread.
    - To support blocking operations, more than N native threads can be used.
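The scheduler is transparent to Ruby code: existing Thread usage is unchanged, and only the environment variables control the mapping. A minimal sketch (run it with RUBY_MN_THREADS=1 to multiplex the threads over at most RUBY_MAX_CPU native threads on the main Ractor; the script itself works on any recent Ruby):

```ruby
# Create 100 Ruby threads. Under the M:N scheduler these are multiplexed
# over at most RUBY_MAX_CPU native threads (default 8) instead of one
# native thread per Ruby thread.
threads = 100.times.map do |i|
  Thread.new do
    sleep 0.01 # a blocking operation; extra native threads may be used here
    i * 2
  end
end

results = threads.map(&:value) # wait for all threads and collect results
puts results.sum               # => 9900
```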
Performance improvements
- defined?(@ivar) is optimized with Object Shapes.
- Name resolution such as Socket.getaddrinfo can now be interrupted (in environments where pthreads are available). [Feature #19965]
- Several performance improvements to the garbage collector:
  - Young objects referenced by old objects are no longer immediately promoted to the old generation. This significantly reduces the frequency of major GC collections. [Feature #19678]
  - A new REMEMBERED_WB_UNPROTECTED_OBJECTS_LIMIT_RATIO tuning variable was introduced to control the number of unprotected objects that cause a major GC collection to trigger. The default is set to 0.01 (1%). This significantly reduces the frequency of major GC collections. [Feature #19571]
  - Write barriers were implemented for many core types that were missing them, notably Time, Enumerator, MatchData, Method, File::Stat, BigDecimal and several others. This significantly reduces minor GC collection time and major GC collection frequency.
  - Most core classes now use Variable Width Allocation, notably Hash, Time, Thread::Backtrace, Thread::Backtrace::Location, File::Stat and Method. This makes these classes faster to allocate and free, reduces memory use, and reduces heap fragmentation.
  - Support for weak references has been added to the garbage collector. [Feature #19783]
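Weak references of the kind [Feature #19783] adds GC support for are exposed through APIs such as ObjectSpace::WeakMap (which also exists on earlier Rubies) and the new ObjectSpace::WeakKeyMap in 3.3. A minimal sketch using WeakMap:

```ruby
# ObjectSpace::WeakMap holds its entries weakly: once the key object is
# garbage-collected, the entry disappears instead of keeping it alive.
map = ObjectSpace::WeakMap.new
key = Object.new
map[key] = "payload"

puts map[key]      # => "payload" while `key` is still strongly referenced
puts map.key?(key) # => true
```

This is the building block for caches that must not prevent their keys from being collected.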
Source:
Tweakers.net