{"id":13608016,"url":"https://github.com/sviperll/argo","last_synced_at":"2025-10-26T20:31:55.303Z","repository":{"id":138839244,"uuid":"41541197","full_name":"sviperll/argo","owner":"sviperll","description":"Pragmatic functional programming language","archived":false,"fork":false,"pushed_at":"2016-12-26T13:00:09.000Z","size":32,"stargazers_count":10,"open_issues_count":0,"forks_count":0,"subscribers_count":5,"default_branch":"master","last_synced_at":"2025-02-01T00:13:53.320Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":"","language":"C","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":null,"status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/sviperll.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":null,"code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null}},"created_at":"2015-08-28T10:19:52.000Z","updated_at":"2022-07-29T07:16:19.000Z","dependencies_parsed_at":"2023-03-13T10:52:47.793Z","dependency_job_id":null,"html_url":"https://github.com/sviperll/argo","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/sviperll%2Fargo","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/sviperll%2Fargo/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/sviperll%2Fargo/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/sviperll%2Fargo/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/sviperll","download_url":"https://codeload.github.com/sviperll/argo/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":23839
7170,"owners_count":19465129,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2024-08-01T19:01:23.577Z","updated_at":"2025-10-26T20:31:50.043Z","avatar_url":"https://github.com/sviperll.png","language":"C","readme":"Argo\n====\n\nPragmatic functional programming language\n\nHere are some of my thought about possible language design.\nThere is no language yet, but The journey is the destination.\n\nRoadmap\n-------\n\n * Become self-hosted as fast as possible to have first-hand experience with language.\n\n * First write parser and transpiler into Haskell without any type-checking or processing.\n\n * Implement parser and transpiler in language itself.\n\n * Add type-checking.\n\n * Compile to STG-language.\n   We can generate Rust source code to build in-memory STG-expressions.\n   This source code will serve as an external format\n   for compiled code.\n   This will allow to avoid complications with\n   definition of some independent external format like it's serialization and validation.\n\n * Write STG-interpreter in Rust.\n\n * Experiment with STG-interpreter to finally define\n   external format for compiled code that will allow validation\n   and possible alternative implementations.\n\n * Try to provide IDE support with syntax-highlighting and rename refactoring\n\n * Implement profiling/hot-spot detection in interpreter.\n\n * Implement Just-in-time optimization (Just-in-time supercompilation).\n\n * Implement Just-in-time compilation into native language.\n\nDesign\n------\n\n### General design guidelines ###\n\nTooling and libraries proved to be 
critical components of practical language success.
Tooling and libraries need language stability. Java is a prime example of a highly developed ecosystem,
and it is firmly based on a promise of backward compatibility and gradual change.

A good language should be simple and minimalistic: it shouldn't have features that could be removed.
A minimalistic language is easy to learn and puts little mental burden on the programmer.
Many successful languages are really minimalistic.
Language simplicity is still subjective, so it comes down to personal judgment and taste.
I consider Scheme, C and Standard ML to be really minimalistic languages.
Java, Haskell 98, Javascript and maybe Python and Rust are more complex, but reasonably minimal.
OCaml, GHC Haskell, Common Lisp and Ruby are reasonably complex.
And at last C++, in my opinion, is unreasonably complex.

But a language shouldn't stagnate: language changes should be possible.
This means that language changes should be proactively planned for.
Every feature should be small and simple.
Every feature should be as general as possible, as long as simplicity is not sacrificed.
But most importantly, language features shouldn't block future extensions.
A language feature should be allowed to become more general later.
A language feature shouldn't steal syntax that could be used for some future extension.
The problem of identifying the set of possible extensions is ill-defined, but nevertheless
some analysis should be performed anyway, following personal judgment, taste and experience.

Readability should be valued more than expressiveness, because,
you know, code is read more frequently than it is written.

### Purity, lazy evaluation and non-strict semantics ###

Laziness seems to be the main cornerstone in discussions of functional language design.
The main counter-point seems to be memory leaks.
As I see it, discussions of laziness lack a balanced comparison of trade-offs and benefits.

Purity, on the other hand, seems to be universally praised and accepted.
Modularity and referential transparency are highly valued and desired.

As has been [stated](http://research.microsoft.com/en-us/um/people/simonpj/papers/history-of-haskell/history.pdf), laziness is the only practical mechanism that paved the way to purity.

Laziness is [required](https://www.cs.kent.ac.uk/people/staff/dat/miranda/whyfp90.pdf) to achieve a really high level of modularity
and referential transparency.

Moreover, laziness is one of the main mechanisms that allowed Haskell to be simplified
to unprecedented minimalism. Laziness allows us to

 * Get rid of functions of type `() -> a` (unit to some other type).
   There is no need to distinguish values from "parameterless" functions.

 * Get rid of special treatment of recursion/co-data etc.

 * Use `undefined` values to test and gradually define a program over the course of its development.

 * Move really far without using macros (or other meta-programming tools).
   Most needs filled by common macro usage in other languages are [filled](https://www.google.com/search?q=macros+vs+lazy+evaluation&ie=utf-8&oe=utf-8) by the most basic tools of
   a lazy language.

Memory leaks are not eradicated by using a strict language! *They can happen anyway.*
Languages deal with memory leaks by providing profilers and other tools to investigate the leaks' causes.
Haskell provides memory profiling tools similar to those of strict languages.

All in all I think that laziness' benefits outweigh its problems.
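The modularity point can be seen in miniature in plain Haskell. The sketch below adapts the square-root example from the "Why Functional Programming Matters" paper linked above: a producer builds an infinite list of estimates, and the stopping rule is a completely independent consumer; in a strict language the two would have to be fused into one loop.

````haskell
-- Producer: an infinite list of Newton-method estimates of sqrt n.
approximations :: Double -> [Double]
approximations n = iterate (\x -> (x + n / x) / 2) 1.0

-- Consumer: walk the (infinite) list until two consecutive
-- estimates agree to within eps.  It knows nothing about Newton.
within :: Double -> [Double] -> Double
within eps (a : b : rest)
  | abs (a - b) <= eps = b
  | otherwise          = within eps (b : rest)
within _ _ = error "within: expects an infinite list"

main :: IO ()
main = print (within 1e-9 (approximations 2.0))
````

Laziness makes the infinite `approximations` list safe to build: only the prefix that `within` inspects is ever computed.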
A practical functional language should be

 * Pure
 * Non-strict *by default*
 * Modular

### Hindley-Milner type-system ###

The Hindley-Milner type-system is [praised](citation-needed) as a sweet-spot among practical type-systems.
On the other hand, the pure Hindley-Milner type-system is never used.

It should be noted what makes Hindley-Milner so good, not from the theoretical type-system point of view, but
from the user-interface and user-experience point of view.

Hindley-Milner provides two separate languages:

 1. an expression language
 2. a type language

The expression language fully defines program behavior by itself, but the type language complements expressions
to help get rid of erroneous programs.

 * The type language in Hindley-Milner is really minimal and lightweight.

 * The Hindley-Milner type language defines *statically enforceable* contracts for program behavior.

 * The type language is fully separate from the expression language.

Type-language separation allows type-level reasoning for programmers.
They can fully ignore expressions and reason about program behavior using the type-level language only.
Programmers can specify high-level program behavior and properties using the type language only.
The type language alone can be used as a design tool (like UML) without actual code (implementation).

This two-level structure with two separate languages seems really valuable.
This structure is severely damaged in dependently-typed languages,
but it seems that dependently-typed languages nevertheless try to preserve it.
My position is that type/expression language separation is really valuable from a user-experience point of view.

A controversial topic here is Haskell's scoped type variables (the ScopedTypeVariables extension).
At first Haskell assumes that the type signature and the value-expression
are totally separate for every declared name, but with scoped type variables they are not separate at all.
You can reference type-variables from the type signature in the value-expression,
even though the type signature and the expression
are clearly syntactically separated, without any hint about their connection.
Nevertheless, ScopedTypeVariables is seldom used, but is really important when it is used.
We may provide some kind of special syntax to use when scoped type variables are needed.

````
mkpair1 :: <a b> a -> b -> (a, b) where
    mkpair1 aa bb = (ida aa, bb) where {
        ida :: a -> a;
        ida = id;
    };
````

### Type system extensions ###

Lots of very practical type-system extensions are provided by most functional languages.
This process seems to be constant boundary-pushing, trying to provide the most practical benefits without
sacrificing type-system usability.

Haskell is positioned as an academic playground in this process.
Type-system extension trade-offs and benefits are not easily envisioned.
Haskell provides an environment to test them and find the best trade-offs.

Haskell-prime was created to provide a stable Haskell flavor, but it seems [abandoned](citation needed) by its community.
It seems that the Haskell community has fully embraced the stream of minor backward-incompatible changes.

Type-system extensions should be adopted only when they are fully understood, and they should be
tested through really explicit opt-in mechanisms that prevent them from being generally adopted prematurely.
Adoption of experimental type-extensions creates language dialects and
poses real difficulties for novices trying to learn the language.
Language extensions as present in GHC create a large and complex language and
remove the possibility of analyzing the language as a whole.
This scatters investment in the ecosystem and prevents fast ecosystem growth.

A type-system extension mechanism can be provided for the language, but it should bring a promise of breakage:
your library is guaranteed not to work with the next release of the compiler if you use a type-system extension.
This promise of instability should prevent general adoption of extensions.
When an
extension proves to be useful, it will stimulate a community-backed movement to adopt and stabilize the
extension as a language feature.

The Rust language provides another way to ensure that experimental features are not adopted.
Rust provides both a stable compiler and unstable nightly builds.
Experimental language features are not accessible from the stable compiler.
The only way to get experimental language features is to become a compiler tester, which seems a reasonable contract.

We can bind experimental language features to a specific compiler version to ensure breakage.
If we do this, then a library that uses an experimental feature will be unstable:
it will surely break with the next compiler version.
The Rust way seems better, as it requires a special compiler version for unstable libraries and thus
clearly marks unstable libraries.

### Modules ###

Inner modules should be provided. By inner modules I mean modules enclosed in a parent module
and defined in the same file along with the parent module.
Haskell's decision to have top-level modules only leads to bad program structure.
Short modules are awkward to use, since they lead to too many short files.
Big modules lead to loose structure.

Inner modules bring the problem of file/compilation-unit lookup. Where should a module be defined?
In the same file as the parent module, or in a subdirectory named after the parent module?
What if both are present? The case when both definitions are present should be a compilation error,
but how can it be detected? A well-working solution can be found in Java, in its separation between
packages and classes.
The same separation can be introduced here:
every module should be defined in some package, and a name clash between a module and a package should be a
compilation error.

Module exports should always have explicit type-signatures.
This seems counter-intuitive, since there is a sentiment that Hindley-Milner-based type systems are cool because
most types can be inferred by the compiler, so you are not required to provide type signatures.

There are counter-points, though.
Providing type-signatures for exported symbols is established Haskell practice.
Having type-signatures allows circular module dependencies without any special mechanisms.
And circular module dependencies are used in Haskell anyway, via somewhat obscure `.hs-boot` files.
Complex types seem to be a design problem:
you should probably not export a symbol with an overly complicated type, or
you should export some specialization of it, which will allow future refinements.

Qualified imports should be the default.
Qualified imports are established Haskell practice.
Qualified imports greatly enhance code readability.
Unqualified imports should be restricted to preserve readability.
For example, we can restrict unqualified imports to a single module.
This allows us to easily identify that module when we see an unqualified name.
Some mechanism should be provided to make qualified imports easy to use.
Java-like imports can be adopted as a starting point. This means that

````
    import util.data.Map
````

will import names from the util.data.Map module, which can be referenced with the `Map.` prefix:

````
    Map.lookup "key1" mymap
````

Haskell's qualified imports are bad in this respect, since it's usually a burden to type

````
    import qualified Data.Map as Map
````

### Infix operators ###

Infix operators should be restricted. There are two extremes here.
Haskell provides fully custom operators.
Java provides no operator overloading at all.
Java's position is that alphabetical names are always more descriptive than operators,
hence they should increase readability.
But the counter-point is that operators greatly enhance readability once
you are familiar with them.
Haskell's problem is that it is hard to become familiar with operators.
And this is a real problem for language beginners.
It is hard to learn all the operators' relative priorities.
When you see `Data.Lens` source for the first time it's a shock, even
if you know Haskell as a language reasonably well.

I propose to provide special `operatorset` files that concisely define all
operators with their types and relative priorities (and associativity).

With C-family languages you can inspect the documentation to find a nice table of operators with their relative precedence
and quickly become fluent with complex expressions like if-conditions.

`operatorset` files should serve this operator-table purpose.
You can see it and learn it, and then you can easily read code that uses these operators.

This means that every compilation-unit (module) can explicitly import a single `operatorset`
and use operators from this set.

When multiple `operatorset`s are required, this should hint that maybe it's time to split the module
to reduce cognitive load.

Some standard `operatorset` can be defined to be implicitly available.
But custom operators should require an explicit import of an `operatorset`, and
examining the `operatorset` source should be enough to learn all the operators and their precedence.

### Type classes ###

Type classes bring a can of worms with them.
Orphan instances should be eradicated, and it is already stable Haskell practice not to define
orphan instances. The way to codify this established Haskell practice is to only allow
instance declarations along with data-type declarations. This by definition rules out many
multi-parameter type-class definitions.
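In today's GHC Haskell, the kind of definition being ruled out looks like this (a minimal sketch for comparison; the class and instance are illustrative only):

````haskell
{-# LANGUAGE MultiParamTypeClasses #-}

-- A multi-parameter conversion class, as GHC allows it today.
class Convert a b where
  convert :: a -> b

-- Neither parameter "owns" this instance: it could equally live
-- next to the Int type or next to the String type, so under an
-- instances-next-to-their-data-type rule it has no legal home.
instance Convert Int String where
  convert = show
````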
You can't have a class `Convert a b` to allow conversions
from one type to another, since it's impossible to define where instance declarations are allowed
for such a type-class. Should you define `Convert Int Text` along with the `Int` type or along with the `Text` type?

Here is imaginary syntax to implement Haskell's Ord type-class.
By the way, the `class` keyword seems too confusing from other languages' perspective, and it may be better to use
`interface` or `trait`:

````
    interface Ord extends Eq {
        compare :: self -> self -> Order;

        equals a b =
            case compare a b:
                EQ -> True
                _ -> False;
    }
````

We can provide multi-parameter type classes when the other parameters depend on the main *self* parameter

````
    interface <k v> Map k v {
        lookup :: k -> self -> Maybe v;
    }
````

Here an implicit functional dependency is present. The Haskell equivalent is

````
class Map self k v | self -> k, self -> v where
    lookup :: k -> self -> Maybe v
````

Haskell type-classes have one more flaw.
They are not extensible.
It's not possible to slip in the `Applicative` class as a superclass of `Monad` without breaking all the code.
And there are possibilities for type-class hierarchy evolution.
This is known as the [Numeric tower](https://en.wikipedia.org/wiki/Numerical_tower) in the Lisp world, and
Haskell has its own Monad tower.
Monad-tower evolution seems inevitable (see [The Other Prelude](https://wiki.haskell.org/The_Other_Prelude) and [AMP](https://wiki.haskell.org/Functor-Applicative-Monad_Proposal) Haskell proposals).
Type classes should be extensible without client-code breakage.
Extensibility of type-classes requires constraints on type-class definitions and implementations.

As a start we can allow single-parameter type-classes only.
Type-class instances are to be defined with their data-type only.
As an extension mechanism we can allow a type-class to provide implementations for required type-classes.
Another mechanism is direct extension. Extension means that a type class is not only required, but
makes all methods of the extended class accessible as if they were members of the current class.

With the Monad-Applicative example, you can make `Monad` extend `Applicative`.
In such a setting you can move the `return` method from `Monad` to `Applicative` without client-code breakage.

Before making `Applicative` a superclass of `Monad`:

````
    interface Applicative extends Functor {
        apply :: <a b> self (a -> b) -> self a -> self b;
        pure :: <a> a -> self a;
        then :: <a b> self a -> self b -> self b;

        then a b = apply (fmap (const id) a) b
    }

    interface Monad {
        return :: <a> a -> self a;
        bind :: <a b> self a -> (a -> self b) -> self b;
        then :: <a b> self a -> self b -> self b;

        then a b = bind a (\_ -> b)
    }
````

After making `Applicative` a superclass of `Monad`:

````
    interface Applicative extends Functor {
        apply :: <a b> self (a -> b) -> self a -> self b;
        pure :: <a> a -> self a;
        return :: <a> a -> self a;
        then :: <a b> self a -> self b -> self b;

        then a b = apply (fmap (const id) a) b
        pure = return
        return = pure
    }

    interface Monad extends Applicative {
        bind :: <a b> self a -> (a -> self b) -> self b;
        join :: <a> self (self a) -> self a

        bind a f = join (fmap f a)
        join a = bind a id
    }
````

Every implementation (instance) that defined `Monad` before `Applicative` was introduced as its *extended* type-class
will work after the change and will automatically implement `Applicative`.

### Data types ###

There is a list of problems with data-type syntax
(see [here](http://www.the-magus.in/Publications/notation.pdf) for example).
Moreover, GADTs have proved to be a universally [accepted](OCaml GADT) type-system feature.
It seems reasonable to always use GADT-syntax (as provided by GHC) even if GADTs are not allowed
as a type-level feature, because GADT-syntax is more explicit and clear than the legacy grammar-like declaration.

Another problem is data-type namespaces. It is usually stated as a problem that Haskell's
records can define conflicting accessors.

Having inner modules, we can always provide a new namespace for data-types. A module can be defined like this:

````
module A:
    module PersonUtil:
        data Person:
            public constructor person :: String -> String -> Person
        public name :: Person -> String
        name (person? n a) = n

    module CityUtil:
        data City:
            public constructor city :: String -> (Int, Int) -> City

        public name :: City -> String
        name (city? n coords) = n
````

It is already established Haskell practice to name a type with the same name as its module. Like here:

````
module Data.Text (...) where
    data Text = ...
    ...
````

It is [recommended](https://github.com/chrisdone/haskell-style-guide) to import this module like this:

````
import qualified Data.Text as Text
import Data.Text (Text)
````

We can, and probably should, build this right into the language and make it possible to define data types
with their namespaces.

````
module A:
    data module Person:
        public constructor person :: String -> String -> Person
        public name :: Person -> String
        name (person? n a) = n

    data module City:
        public constructor city :: String -> (Int, Int) -> City

        public name :: City -> String
        name (city? n coords) = n
````

So when we import such a module:

````
    import pkg.A.Person
````

we can reference the `Person` type without any additional ceremony, and we can reference all values
from the Person module with the `Person.` prefix.
Like this:

````
    hello :: Person -> String
    hello p = "Hello, " ++ Person.name p
````

### Dependent types ###

Haskell has proved that people will push for dependent types.
Haskell proved that people like Haskell and try to bolt dependent types onto it.
We can see two sides of this.

The first is that new dependently typed languages have been created,
directly inspired by Haskell: Agda, Idris.

The second is that GHC provides type-system extensions that bring its type-system closer to a dependently-typed language.
We now have type-level literals and automatic promotion of data constructors to the type level.

With these extensions GHC now has two complex and distinct computational languages:
one for computations with values, and another, as powerful as the first, for computations with types.

But do we really need dependent types at all?
There is [some](http://www.brics.dk/RS/01/10/BRICS-RS-01-10.pdf)
[evidence](http://jadpole.github.io/rust/typechecked-matrix/) that most benefits are provided
by type-level (string and numeric) literals and not by dependent types per se.

It still seems to be a research topic.
And a research topic is something you should try to avoid when designing an industrial programming language...

Still, we can follow Haskell for now and try to leave space for the future addition of dependent types.
This is still a problem that should be solved for successful language design.

One low-hanging fruit on this path is language identifiers.
We should probably not make any assumption about identifier character case,
and should treat upper-case and lower-case identifiers without any prejudice
from the start, to avoid many of Haskell's problems that lead to awkward syntax.

### Metaprogramming ###

It is still an open question whether a non-strict functional language needs
real metaprogramming.

Metaprogramming is easy in Lisp, since Lisp has little syntax, but
it is definitely trickier in a language like Haskell.

Metaprogramming brings quasi-quotation to the
table.
After watching the Yesod project I would really like to avoid
any quasi-quotation abuse and not provide any form of quasi-quotation
at all.

My feeling is that metaprogramming shouldn't be easy.
We should really try to explore language limitations and
try to design the language around them, and
not drink the metaprogramming kool-aid.

What I see as a real need for metaprogramming is compile-time
processing of some externally provided data. For example,
web development requires HTML-code generation.
HTML code is usually developed externally, to allow
easy sharing and possible independent, design-driven development.
When HTML code is developed independently, we would like
to statically check it and generate code to render application data
into the given HTML templates.

Another example is generating interoperability routines
from some standard interface-description language,
for instance generating a Document Object Model (DOM) implementation
using the IDL description provided by the W3C.

This can be achieved with metaprogramming facilities.
Considering the above examples, we can conclude that
metaprogramming needs IO, or at least an ability to read a file.

Metaprogramming facilities need to define a language
that can be used to generate expressions/declarations.
My point is that this language should not be a
full-fledged mirror of the original language.
The generated language doesn't need syntactic sugar.
There is simply no need for it, since you can always implement
any sugar you want in the host language.

Even if metaprogramming is not implemented at first,
we must leave space for it to possibly be bolted on later.
Therefore it is better to have two defined language levels from the start:
a kernel language and a full language.

The kernel language is the language without syntactic sugar.
Haskell doesn't have its variant of a kernel language.
GHC has the *core* language, but it is too low-level, and the
Haskell-to-*core* translation is [really](GHC implementation) not
obvious.

### Compilation and Run-time ###

This topic is akin to the discussion of laziness.
What I want to state is that a VM is needed for the language, similar to the JVM.
A VM is the only known way to get both modularity and performance.

A VM provides:

 * Fast compilation times.
   The Go language proves that this may be very important to some people.

 * Fast execution of modular code.
   A VM has no limitations on optimizing little functions spread out across a bunch of modules.

Optimizing compilers like GHC can be used, but they have costs:

 * Much longer compilation times.
   The compiler needs to optimize everything, unlike a VM that can optimize hot spots only.

 * True separate compilation can't be implemented, because
   cross-module inlining prevents it.
   You need to recompile every dependent module even if the module interface doesn't change,
   to prevent inlining of an old (bad) version of some function.

To get true benefits from a VM, the VM's (byte-)code should be well suited for optimization.
This means that the code should be high-level enough.
The GHC Core language should be the VM's "byte-code" if we were going to build a VM for GHC Haskell,
because GHC performs most of its optimizations at the Core level.

I'd like to use supercompilation as an optimization technique, since it gets some
[good results](citation needed). If we want to use supercompilation as the VM's
optimization technique, we should use the STG language as the VM's byte-code.

This brings us to the "Just use the JVM" sentiment.
The problem with the JVM is that if we really need speed, then the JVM will be a limitation in the end.
The JVM will not be able to optimize the things that we think really need optimizing for our language,
list deforestation for example.
If we don't need speed, we can easily write a simple interpreter and be done.
If we write a simple interpreter, we can later implement optimization and
just-in-time compilation.
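The "simple interpreter first" route can be sketched in a few lines of Haskell: a toy call-by-need evaluator where every `let`-binding becomes a memoising thunk. The expression type and names here are hypothetical toys, far simpler than the actual STG machine.

````haskell
import Data.IORef

-- A toy lazy core: literals, addition, variables and non-recursive let.
data Expr = Lit Int | Add Expr Expr | Var String | Let String Expr Expr

-- A Thunk stores either a pending computation or its memoised result.
newtype Thunk = Thunk (IORef (Either (IO Int) Int))

-- Force a thunk: run the computation at most once, cache the result.
force :: Thunk -> IO Int
force (Thunk ref) = do
  contents <- readIORef ref
  case contents of
    Right v  -> pure v              -- already evaluated, reuse
    Left act -> do
      v <- act                      -- evaluate once
      writeIORef ref (Right v)      -- memoise for later forces
      pure v

eval :: [(String, Thunk)] -> Expr -> IO Int
eval _   (Lit n)       = pure n
eval env (Add a b)     = (+) <$> eval env a <*> eval env b
eval env (Var x)       =
  maybe (error ("unbound variable: " ++ x)) force (lookup x env)
eval env (Let x rhs b) = do
  t <- Thunk <$> newIORef (Left (eval env rhs))  -- bind lazily
  eval ((x, t) : env) b

main :: IO ()
main = do
  -- let d = 1 + 2 in d + d  ==> 6, with "1 + 2" evaluated only once
  r <- eval [] (Let "d" (Add (Lit 1) (Lit 2)) (Add (Var "d") (Var "d")))
  print r
````

An interpreter of this shape is trivially correct and leaves every later optimization (profiling, supercompilation, JIT) as a separate layer.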
If we choose the JVM, we are stuck with the choices of the JVM developers.

The "Just use the JVM" sentiment has another side.
It references the fact that Java has lots of libraries and tools built for it, and
we can just reuse them.
But the Java ecosystem is not unique: Python has lots of high-quality libraries, and the
GNU ecosystem built around the C programming language has a lot to offer.
We may be better off with some flexible interoperability scheme than tying ourselves to
one particular ecosystem.

Another point worth remembering is that lots of Java tools are not tied to Java at all.
You can use Java IDEs and Java build tools for languages other than Java.

### Dependency management ###

It seems that dependency management across libraries is not a language-design problem.
But there is evidence that a successful solution requires at least some support at the language level.
In Java's case such a [solution](http://openjdk.java.net/projects/jigsaw/) has led to an extension of the original language.
I've found the [Version SAT essay](https://research.swtch.com/version-sat) very inspiring, and
I think it would be great to adopt its proposed solution to dependency management.

>
> One way to avoid NP-completeness is to attack assumption 1:
> what if, instead of allowing a dependency to list specific package versions,
> a dependency can only specify a minimum version?
> Then there is a trivial algorithm for finding the packages to use:
>
>  * start with the newest version of what you want to install, and then
>  * get the newest version of all its dependencies, recursively.
>
> In the original diamond dependency at the beginning of this article,
> A needs B and C, and B and C need different versions of D.
> If B needs D 1.5 and C needs D 1.6, the build can use D 1.6 for both.
> If B doesn’t work with D 1.6, then either the version of B we’re considering is buggy or D 1.6 is buggy.
> The buggy version should be
removed from circulation entirely, and then a new released version should fix the problem.
> Adding a conflict to the dependency graph instead is like documenting a bug instead of fixing it.
>
> Another way to avoid NP-completeness is to attack assumption 4:
> what if two different versions of a package could be installed simultaneously?
> Then almost any search algorithm will find a combination of packages to build the program;
> it just might not be the smallest possible combination (that’s still NP-complete).
> If B needs D 1.5 and C needs D 2.2, the build can include both packages in the final binary,
> treating them as distinct packages.
> I mentioned above that there can’t be two definitions of printf built into a C program,
> but languages with explicit module systems should have no problem including separate copies of D
> (under different fully-qualified names) into a program.
>
> Another way to avoid NP-completeness is to combine the previous two.
> As the examples already hint at, if packages follow semantic versioning,
> a package manager might automatically use the newest version of a dependency within a major version
> but then treat different major versions as different packages.
>

So, I'd like to adopt the last mentioned solution and allow different major versions of the same package to
be used simultaneously.
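The resolution rule this implies is small enough to sketch directly (all names here are hypothetical): a requirement names a package, a major version and a minimum minor version; different majors resolve as distinct units, and within one major the highest requested minimum simply wins, so no SAT-style search is needed.

````haskell
import qualified Data.Map.Strict as Map

-- Hypothetical requirement: package, major version, minimum minor version.
type Requirement = (String, Int, Int)

-- Each (package, major) pair resolves independently; within one major
-- version we take the largest requested minimum.
resolve :: [Requirement] -> Map.Map (String, Int) Int
resolve reqs =
  Map.fromListWith max [ ((pkg, major), minor) | (pkg, major, minor) <- reqs ]

main :: IO ()
main =
  -- B needs D 1.5, C needs D 1.6, something else needs D 2.2:
  -- D major 1 resolves to minor 6, and D major 2 coexists at minor 2.
  print (resolve [("D", 1, 5), ("D", 1, 6), ("D", 2, 2)])
````

This mirrors the diamond example from the quoted essay: D 1.6 serves both B and C, while D 2.2 is treated as a separate package.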
As mentioned above, this requires language-level support.
As I see it, we shouldn't allow direct dependencies on two different major versions of the same library,
but different major versions can be selected during transitive dependency resolution.

For our language to support this, the compiler should bind code to a major version of each dependency during compilation.
A compiled artifact should know which major version of an artifact it binds to.
Source code shouldn't contain this knowledge; it should be specified in some module description used by the
compiler during the build.

### Layout rule and curly braces ###

I have no strong opinions about the layout rule. But lately I've started to think that it brings more
complications than benefits. Modern languages like Rust and Ruby seem to get away without layout processing.
Haskell's layout rule was one of the obstacles when I was learning the language.
Even now the fact that Haskell's parser fixes parsing errors by automatically inserting closing curly braces
makes me uncomfortable.

### Numbers and literals ###

Haskell as it is has two simple problems with its built-in syntax and types.

The first is its reliance on the `Integer` type. `Integer` is an unbounded integer type.
The reality is that `Integer` itself is not usually useful for many programs, but
nevertheless it is used by almost all Haskell code, since any numeric literal
implicitly creates an `Integer` and then converts it to the type actually used
(usually the fast, system-dependent `Int`). A fast `Integer` implementation is not trivial and is not
easily found on many platforms (JavaScript, for instance).

It seems to me that large numeric literals are almost never used in actual code.
It may be better to get rid of the reliance on the `Integer` type.
The Go language has polymorphic numeric literals like Haskell does, but doesn't rely on an
unbounded integer type. Another problem with `Integer` is overflow.
What if a literal is too large for the required type?
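
For fixed-width types, today's GHC answers by silently wrapping the value; only the `-Woverflowed-literals` warning hints at the problem. A minimal demonstration:

```haskell
import Data.Word (Word8)

-- In GHC, the literal `300 :: Word8` desugars to `fromInteger 300 :: Word8`.
-- Word8's fromInteger wraps modulo 256, so the oversized literal silently
-- becomes 44 instead of failing to compile.
tooBig :: Word8
tooBig = fromInteger 300   -- 300 mod 256 == 44
```
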
With Haskell's `Num` class we get, at best, a run-time error
when calling the `fromInteger` method (for fixed-width types the value usually just wraps around silently).
It would be better to always get compile-time errors on such occasions.

We can solve these problems by introducing a hierarchy of classes instead of the single `fromInteger` method.
We can introduce `FromWord8`, `FromWord16` and `FromWord32` classes and choose the least class required,
depending on the actual value of the literal.

`8` is syntactic sugar for `fromWord8 (8::Word8)`. `300` is syntactic sugar for `fromWord16 (300::Word16)`.

With such a class hierarchy we can get rid of the unbounded `Integer` type on platforms where it is problematic.
With it we get a compile-time error like `Word8 doesn't implement FromWord16 interface` for the expression `300::Word8`.

Another Haskell pain point is negative numbers. Should minus be a unary operator?
Should minus be part of the number syntax? I'm inclined to cut this knot and say that
minus is *always* an infix operator and there is no negative number syntax.

You can always write `0 - 5` or `negative 5` to denote a negative number.
This is an actual expression and not a single token.
Such expressions can be optimized away to behave like compile-time constants.
But in the end we get a simple and uniform language without strange dark corners.

### Final syntax examples ###

````
package argo.util

import argo.util.Functor

data module List a implements Functor:
    public constructor cons :: <a> a -> List a
    public constructor nil :: <a> List a

    public append :: List a -> List a -> List a
    append (cons? x xs) ys = cons x (append xs ys)
    append nil? ys = ys

    public reverse :: <a> List a -> List a
    reverse (cons? x xs) = append (reverse xs) (cons x nil)
    reverse nil? = nil

    Functor.fmap f (cons? x xs) = cons (f x) (Functor.fmap f xs)
    Functor.fmap f nil? = nil
````

We may possibly leave out layout rules...

````
package ru.mobico.sviperll.test;

import argo.lang.System;
import argo.lang.Unit;
import argo.lang.IO;

module Main {
    public main1 :: IO Unit;
    main1 = do {
        name <- System.readLine;
        System.putStrLn message;
    } where {
        message = "Hello, World!";
    }

    public main2 :: IO Unit;
    main2 =
        do {
            name <- System.readLine;
            System.putStrLn message;
        } where {
            message = "Hello, World!";
        }

    public main3 :: IO Unit;
    main3 = do {
        name <- System.readLine;
        System.putStrLn message;
    } where {
        message = "Hello, World!";
    }

    public map :: <a b> (a -> b) -> List a -> List b;
    map f (cons? x xs) = cons (f x) (map f xs);
    map f nil? = nil;

    fibs :: List Int;
    fibs = cons 1 $ cons 1 $ zipWith (+) fibs (tail fibs);

    qsort :: <a> List a -> List a;
    qsort nil? = nil;
    qsort (cons? x xs) = append (qsort ls) (cons x $ qsort rs)
        where (ls, rs) = partition (< x) xs;

    fact :: <a> a -> a where {a implements Num};
    fact x
        if (x <= 0) = 1
        else = x * fact (x - 1)
}

````
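
Circling back to the numeric-literal section above: the proposed `FromWord8`/`FromWord16` hierarchy can be sketched in today's Haskell. The class and method names come from the text; the instances and the superclass arrangement are my assumptions for illustration.

```haskell
import Data.Word (Word8, Word16)

-- The smallest literal class: any type that can absorb an 8-bit value.
class FromWord8 a where
  fromWord8 :: Word8 -> a

-- A wider literal demands strictly more; the superclass constraint
-- mirrors the idea that every FromWord16 type is also a FromWord8 type.
class FromWord8 a => FromWord16 a where
  fromWord16 :: Word16 -> a

instance FromWord8  Word8  where fromWord8  = id
instance FromWord8  Word16 where fromWord8  = fromIntegral
instance FromWord16 Word16 where fromWord16 = id

-- Under the proposal, `8` desugars to `fromWord8 (8 :: Word8)` and
-- `300` to `fromWord16 (300 :: Word16)`.  Since Word8 has no FromWord16
-- instance, `300 :: Word8` would be rejected at compile time.
eight :: Word16
eight = fromWord8 8

threeHundred :: Word16
threeHundred = fromWord16 300
```

Because the compiler picks the class from the literal's actual value, only types that can genuinely hold the value have the required instance, and the overflow case from the "Numbers and literals" section becomes a type error rather than a run-time surprise.
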