{"id":28454944,"url":"https://github.com/fsprojects/fsharp-ai-tools","last_synced_at":"2025-10-24T11:59:15.668Z","repository":{"id":44659537,"uuid":"162639631","full_name":"fsprojects/fsharp-ai-tools","owner":"fsprojects","description":"TensorFlow API for F# + F# for AI Models eDSL","archived":false,"fork":false,"pushed_at":"2022-02-01T16:25:27.000Z","size":6732,"stargazers_count":214,"open_issues_count":3,"forks_count":17,"subscribers_count":28,"default_branch":"master","last_synced_at":"2025-06-28T16:44:16.046Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":"","language":"F#","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/fsprojects.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null}},"created_at":"2018-12-20T22:54:12.000Z","updated_at":"2024-12-02T02:26:36.000Z","dependencies_parsed_at":"2022-09-12T09:41:04.973Z","dependency_job_id":null,"html_url":"https://github.com/fsprojects/fsharp-ai-tools","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"purl":"pkg:github/fsprojects/fsharp-ai-tools","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/fsprojects%2Ffsharp-ai-tools","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/fsprojects%2Ffsharp-ai-tools/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/fsprojects%2Ffsharp-ai-tools/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/fsprojects%2Ffsharp-ai-tools/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/fsprojects","download_url":"https://codeload.github.com/fsprojects/fsharp-ai-tools/tar.g
z/refs/heads/master","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/fsprojects%2Ffsharp-ai-tools/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":280791360,"owners_count":26391692,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","status":"online","status_checked_at":"2025-10-24T02:00:06.418Z","response_time":73,"last_error":null,"robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":true,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2025-06-06T21:14:22.812Z","updated_at":"2025-10-24T11:59:15.663Z","avatar_url":"https://github.com/fsprojects.png","language":"F#","readme":"This repo contains archival material about \"F# for AI Models\". \n\nContents: \n\n* FM: An F# DSL for AI Models with separated shape checking and tooling\n\n  **FM was a prototype F# eDSL for writing numeric models.  It has now been subsumed by [DiffSharp 1.0](https://diffsharp.github.io).**\n\n* The TensorFlow API for F# \n\n  **This is now archived. 
We recommend TensorFlow.NET or [DiffSharp 1.0](https://diffsharp.github.io).**\n\n* Live Checking Tooling for AI models\n\n  **This is now being merged into [DiffSharp 1.0](https://github.com/DiffSharp/DiffSharp/pull/207).**\n\n* fsx2nb **now part of [the fsdocs tool](https://fsprojects.github.io/FSharp.Formatting/commandline.html).**\n\n# ARCHIVAL MATERIAL: FM: An F# DSL for AI Models \n\n\nModels written in FM can be passed to \noptimization and training algorithms utilising automatic differentiation without\nany change to modelling code, and can be executed on GPUs and TPUs using TensorFlow.\n\nThere is also experimental tooling for interactive tensor shape-checking, inference, tooltips and other nice things. \n\nThis is a proof of concept that F# can be configured to be suitable for authoring AI models. We\nexecute models as real, full-speed TensorFlow graphs, cohabiting with and benefiting from the TF ecosystem.\nLive trajectory-execution tooling interactively gives added correctness guarantees and developer productivity.\n\nFM is implemented in the FSAI.Tools package built in this repo.\n\nThe aim of FM is to support the authoring of numeric functions and AI models - including\nneural networks - in F# code. 
For example:\n\n```fsharp\n/// A numeric function of two parameters, returning a scalar, see\n/// https://en.wikipedia.org/wiki/Gradient_descent\nlet f (xs: DT\u003cdouble\u003e) = \n    sin (v 0.5 * sqr xs.[0] - v 0.25 * sqr xs.[1] + v 3.0) * -cos (v 2.0 * xs.[0] + v 1.0 - exp xs.[1])\n```\n\nThese functions and models can then be passed to optimization algorithms that utilise gradients, e.g.\n\n```fsharp\n// Pass the numeric function above to a gradient descent optimizer\nlet train numSteps = GradientDescent.train f (vec [ -0.3; 0.3 ]) numSteps\n\nlet results = train 200 |\u003e Seq.last\n```\n\nFM supports the live \"trajectory\" checking of key correctness properties of your numeric code,\nincluding vector, matrix and tensor size checking, and tooling to interactively report the sizes.  To activate\nthis tooling you need to specify a `LiveCheck` that is interactively executed by the experimental tooling\ndescribed further below.\n\n```fsharp\n[\u003cLiveCheck\u003e] \nlet check1 = train 4 |\u003e Seq.last \n```\nWhen using live-checks, underlying tensors are not actually populated with data - instead only their\nshapes are analyzed.  Arrays and raw numeric values are computed as normal.\n\nTypically each model is equipped with one `LiveCheck` that instantiates the model on training data.\n\n\n### ARCHIVAL MATERIAL: Optimization algorithms utilising gradients\n\nThe aim of FM is to allow the clean description of numeric code and yet still allow this code to be\nexecuted using either TensorFlow or - in the future - other tensor fabrics such as Torch (TorchSharp)\nor DiffSharp.  These fabrics automatically compute the gradients of your models and functions with respect to\nmodel parameters and/or function inputs.  Gradients are usually computed inside an optimization\nalgorithm.\n\nFor example, a naive version of Gradient Descent is shown below:\n\n```fsharp\nmodule GradientDescent =\n\n    // Note, the rate in this example is constant. 
Many practical optimizers use a variable\n    // update rate, often decreasing over time.\n    let rate = 0.005\n\n    // Gradient descent\n    let step f xs =   \n        // Get the partial derivatives of the function\n        let df xs =  fm.diff f xs  \n        printfn \"xs = %A\" xs\n        let dzx = df xs \n        // evaluate to output values \n        xs - v rate * dzx |\u003e fm.eval\n\n    let train f initial steps = \n        initial |\u003e Seq.unfold (fun pos -\u003e Some (pos, step f pos)) |\u003e Seq.truncate steps \n```\n\nNote the call to `fm.diff` - FM allows optimizers to derive the gradients of FM\nfunctions and models in a way inspired by the design of `DiffSharp`. For example:\n\n```fsharp\n// Define a function which will be executed using TensorFlow\nlet f x = x * x + v 4.0 * x \n\n// Get the derivative of the function. This computes \"2*x + 4.0\"\nlet df x = fm.diff f x  \n\n// Run the derivative \ndf (v 3.0) |\u003e fm.RunScalar // returns 6.0 + 4.0 = 10.0\n```\n\nTo differentiate a scalar function with multiple input variables:\n\n```fsharp\n// Define a function which will be executed using TensorFlow\n// computes x1*x1*x3 + x2*x2*x2 + x3*x3*x1\nlet f (xs: DT\u003c'T\u003e) = sum (xs * xs * fm.Reverse xs)\n\n// Get the partial derivatives of the scalar function\n// computes [ 2*x1*x3 + x3*x3; 3*x2*x2; 2*x3*x1 + x1*x1 ]\nlet df xs = fm.diff f xs   \n\n// Run the derivative \ndf (vec [ 3.0; 4.0; 5.0 ]) |\u003e fm.RunArray // returns [ 55.0; 48.0; 39.0 ]\n```\n\n### ARCHIVAL MATERIAL: A Larger Example\n\nBelow we show fitting a linear model to training data, by differentiating a loss function w.r.t. 
coefficients, and optimizing\nusing gradient descent (a linear model with 10 coefficients, 500 training points generated by a noisy linear function, 200 gradient descent steps).\n\n```fsharp\nmodule ModelExample =\n\n    let modelSize = 10\n\n    let checkSize = 5\n\n    let trainSize = 500\n\n    let validationSize = 100\n\n    let rnd = Random()\n\n    let noise eps = (rnd.NextDouble() - 0.5) * eps \n\n    /// The true coefficients used to generate the training data (a linear model plus some noise)\n    let trueCoeffs = [| for i in 1 .. modelSize -\u003e double i |]\n\n    let trueFunction (xs: double[]) = \n        Array.sum [| for i in 0 .. modelSize - 1 -\u003e trueCoeffs.[i] * xs.[i]  |] + noise 0.5\n\n    let makeData size = \n        [| for i in 1 .. size -\u003e \n            let xs = [| for i in 0 .. modelSize - 1 -\u003e rnd.NextDouble() |]\n            xs, trueFunction xs |]\n         \n    /// Make the data used to symbolically check the model\n    let checkData = makeData checkSize\n\n    /// Make the training data\n    let trainData = makeData trainSize\n\n    /// Make the validation data\n    let validationData = makeData validationSize\n \n    let prepare data = \n        let xs, y = Array.unzip data\n        let xs = batchOfVecs xs\n        let y = batchOfScalars y\n        (xs, y)\n\n    /// Evaluate the model for inputs and coefficients\n    let model (xs: DT\u003cdouble\u003e, coeffs: DT\u003cdouble\u003e) = \n        fm.Sum (xs * coeffs, axis= [| 1 |])\n           \n    let meanSquareError (z: DT\u003cdouble\u003e) tgt = \n        let dz = z - tgt \n        fm.Sum (dz * dz) / v (double modelSize) / v (double z.Shape.[0].Value) \n\n    /// The loss function for the model w.r.t. 
a true output\n    let loss (xs, y) coeffs = \n        let y2 = model (xs, batchExtend coeffs)\n        meanSquareError y y2\n          \n    let validation coeffs = \n        let z = loss (prepare validationData) (vec coeffs)\n        z |\u003e fm.eval\n\n    let train inputs steps =\n        let initialCoeffs = vec [ for i in 0 .. modelSize - 1 -\u003e rnd.NextDouble()  * double modelSize ]\n        let inputs = prepare inputs\n        GradientDescent.train (loss inputs) initialCoeffs steps\n           \n    [\u003cLiveCheck\u003e]\n    let check1 = train checkData 1  |\u003e Seq.last\n\n    let learnedCoeffs = train trainData 200 |\u003e Seq.last |\u003e fm.toArray\n         // [|1.017181246; 2.039034327; 2.968580146; 3.99544071; 4.935430581;\n         //   5.988228378; 7.030374908; 8.013975714; 9.020138699; 9.98575733|]\n\n    validation trueCoeffs\n\n    validation learnedCoeffs\n```\n\nMore examples/tests are in [dsl-live.fsx](https://github.com/fsprojects/FSAI.Tools/blob/master/examples/dsl/dsl-live.fsx).\n\nThe approach scales to the complete expression of deep neural networks \nand full computation graphs. 
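\n\nTo tie the fragments above together, a minimal end-to-end run of the opening two-parameter example might look like the sketch below. This is illustrative only: FSAI.Tools is archived, and the sketch reuses only definitions shown earlier (`f`, `GradientDescent.train`, `vec`, `fm.toArray`).\n\n```fsharp\n// Sketch: minimise the two-parameter function f from the opening example\n// using the naive GradientDescent module defined above.\nlet trajectory = GradientDescent.train f (vec [ -0.3; 0.3 ]) 200\n\n// The last point of the trajectory approximates a local minimum;\n// fm.toArray extracts the raw values from the DT\u003cdouble\u003e result.\nlet finalPoint = trajectory |\u003e Seq.last |\u003e fm.toArray\n```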
The links below show the implementation of a common DNN sample (the samples may not\nyet run; this is wet paint):\n\n* [NeuralStyleTransfer in DSL form](https://github.com/fsprojects/FSAI.Tools/blob/master/examples/dsl/NeuralStyleTransfer-dsl.fsx)\n\nThe design is intended to allow alternative execution with Torch or DiffSharp.\nDiffSharp may be used once Tensors are available in that library.\n\n\n### ARCHIVAL MATERIAL: Technical notes:\n\n* `DT` stands for `differentiable tensor`, and values of the single type `DT\u003c_\u003e` are used to represent differentiable scalars, vectors, matrices and tensors.\n  If you are familiar with the design of `DiffSharp` there are similarities here: DiffSharp defines `D` (differentiable scalar), `DV` (differentiable\n  vector), `DM` (differentiable matrix).\n\n* `fm.gradients` is used to get gradients of arbitrary outputs w.r.t. arbitrary inputs\n\n* `fm.diff` is used to differentiate `R^n -\u003e R` scalar-valued functions (loss functions) w.r.t. multiple input variables. If \n  a scalar input is used, a single total derivative is returned. If a vector of inputs is used, a vector of\n  partial derivatives is returned.\n\n* In the prototype, all gradient-based functions are implemented using TensorFlow's `AddGradients`, i.e. the C++ implementation of\n  gradients. This has many limitations.\n\n* `fm.*` is a DSL for expressing differentiable tensors using the TensorFlow fundamental building blocks.  The naming\n  of operators in this DSL is currently TensorFlow-specific and may change.\n\n* A preliminary pass of shape inference is performed _before_ any TensorFlow operations are performed.  
This\n  allows you to check the shapes of your differentiable code independently of TensorFlow's shape computations.\n  The shape inference system, which is akin to F# type inference, allows many shapes to be inferred.\n  It also means not all TensorFlow automatic shape transformations are applied during shape inference.\n\n# ARCHIVAL MATERIAL: The TensorFlow API for F# \n\nSee `FSAI.Tools`.  This API is designed in a similar way to `TensorFlowSharp`, but is implemented directly in F# and\ncontains some additional functionality.\n\n# ARCHIVAL MATERIAL: Live Checking Tooling for AI models\n\nThere is some tooling to do \"live trajectory execution\" of models and training on limited training sets,\nreporting tensor sizes and performing tensor size checking.\n\nLiveCheck for a vector addition:\n\n\u003cp align=\"center\"\u003e\n  \u003cimg src=\"https://user-images.githubusercontent.com/7204669/52524060-90eee980-2c90-11e9-9b0e-2752480dbe7d.gif\" width=\"512\"  title=\"LiveCheck example for vectors\"\u003e\n\u003c/p\u003e\n\n\nLiveCheck for a DNN:\n\n\u003cp align=\"center\"\u003e\n  \u003cimg src=\"https://user-images.githubusercontent.com/7204669/52758231-6c33a280-2fff-11e9-9098-2c47b60f71fe.gif\" width=\"512\"  title=\"LiveCheck example for a DNN\"\u003e\n\u003c/p\u003e\n\n\n1. Clone the necessary repos\n\n       git clone http://github.com/dotnet/fsharp\n       git clone http://github.com/fsprojects/FSharp.Compiler.PortaCode\n       git clone http://github.com/fsprojects/fsharp-ai-tools\n\n2. Build the VS tooling with the extensibility \"hack\" to allow 3rd party tools to add checking and tooltips\n\n       cd fsharp\n       git fetch https://github.com/dsyme/fsharp livecheck\n       git checkout livecheck\n       .\\build.cmd\n       cd ..\n\n3. Compile the extra tool\n\t\n       dotnet build FSharp.Compiler.PortaCode\n\n4. Compile this repo\n\n       dotnet build fsharp-ai-tools\n\n5. 
Start the tool and edit using the experimental VS instance\n\n       cd fsharp-ai-tools\\examples\n       devenv.exe /rootsuffix RoslynDev\n       ..\\..\\..\\FSharp.Compiler.PortaCode\\FsLive.Cli\\bin\\Debug\\net471\\FsLive.Cli.exe --eval --writeinfo --watch --vshack --livechecksonly  --define:LIVECHECK dsl-live.fsx\n\n       (open dsl-live.fsx)\n\n# ARCHIVAL MATERIAL: fsx2nb\n\nThere is a separate tool `fsx2nb` in the repo to convert F# scripts to F# Jupyter notebooks:\n\n    dotnet fsi tools\\fsx2nb.fsx -i script\\sample.fsx\n\nThese scripts use the following elements:\n\n\n    (**markdown \n    \n    *)\n\n    (**cell *)   -- delimits two code cells\n    \n    (**ydec xyz *)   -- this adds 'xyz' to a code cell for producing visual outputs\n    \n    #if INTERACTIVE   -- this is removed in a code block\n    ...\n    #endif\n\n    #if COMPILED   -- this is removed in a code block\n    ...\n    #endif\n    \n    \n    #if NOTEBOOK   -- the contents are kept and the #if/#endif lines are removed\n    ...\n    #endif\n\n\n\n# Building\n\n    dotnet build\n    dotnet test\n    dotnet pack\n\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Ffsprojects%2Ffsharp-ai-tools","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Ffsprojects%2Ffsharp-ai-tools","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Ffsprojects%2Ffsharp-ai-tools/lists"}