{"id":13412784,"url":"https://github.com/viant/afs","last_synced_at":"2025-04-08T12:07:47.796Z","repository":{"id":39162392,"uuid":"203229722","full_name":"viant/afs","owner":"viant","description":"Abstract File Storage","archived":false,"fork":false,"pushed_at":"2024-04-24T20:17:26.000Z","size":345,"stargazers_count":301,"open_issues_count":3,"forks_count":36,"subscribers_count":16,"default_branch":"master","last_synced_at":"2024-07-31T20:51:26.283Z","etag":null,"topics":["file-manager","filesystem","s3","scp","tar","zip"],"latest_commit_sha":null,"homepage":null,"language":"Go","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/viant.png","metadata":{"files":{"readme":"README.md","changelog":"CHANGELOG.md","contributing":null,"funding":null,"license":"LICENSE.txt","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null}},"created_at":"2019-08-19T18:43:38.000Z","updated_at":"2024-07-21T11:10:01.000Z","dependencies_parsed_at":"2024-04-24T21:29:59.919Z","dependency_job_id":"c7b0aabb-ab11-4cb8-8bcb-b48f5c44fa97","html_url":"https://github.com/viant/afs","commit_stats":null,"previous_names":[],"tags_count":84,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/viant%2Fafs","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/viant%2Fafs/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/viant%2Fafs/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/viant%2Fafs/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/viant","download_url":"https://codeload.github.com/viant/afs/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":247838444,"owners_count":21004580,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["file-manager","filesystem","s3","scp","tar","zip"],"created_at":"2024-07-30T20:01:29.157Z","updated_at":"2025-04-08T12:07:47.778Z","avatar_url":"https://github.com/viant.png","language":"Go","readme":"# afs - abstract file storage\n\n[![GoReportCard](https://goreportcard.com/badge/github.com/viant/afs)](https://goreportcard.com/report/github.com/viant/afs)\n[![GoDoc](https://godoc.org/github.com/viant/afs?status.svg)](https://godoc.org/github.com/viant/afs)\n![goversion-image](https://img.shields.io/badge/Go-1.11+-00ADD8.svg)\n\u003ca href='https://github.com/jpoles1/gopherbadger' target='_blank'\u003e![gopherbadger-tag-do-not-edit](https://img.shields.io/badge/Go%20Coverage-78%25-brightgreen.svg?longCache=true\u0026style=flat)\u003c/a\u003e\n\n\nPlease refer to [`CHANGELOG.md`](CHANGELOG.md) if you encounter breaking changes.\n\n- [Motivation](#motivation)\n- [Introduction](#introduction)\n- [Usage](#usage)\n- [Matchers](#matchers)\n- [Content modifiers](#content-modifiers)\n- [Streaming data](#streaming-data)\n- [Options](#options)\n- [Storage Implementations](#storage-implementations)\n- [Testing mode](#testing-mode)\n- [Storage Manager](#storage-managers)\n- [GoCover](#gocover)\n- [License](#license)\n- [Credits and Acknowledgements](#credits-and-acknowledgements)\n\n## Motivation\n\nWhen dealing with various storage systems, like cloud storage, SCP, container or local file system, using shared API for typical storage operation provides an excellent simplification.\nWhat's more, the ability to simulate storage-related errors like Auth or EOF allows you to test an app error handling.\n\n## Introduction\n\nThis library uses a storage manager abstraction to provide an implementation for a specific storage system with following \n\n* **CRUD Operation:**\n\n```go\nList(ctx context.Context, URL string, options ...Option) ([]Object, error)\n\nWalk(ctx context.Context, URL string, handler OnVisit, options ...Option) error\n\nOpen(ctx context.Context, object Object, options ...Option) (io.ReadCloser, error)\n\nOpenURL(ctx context.Context, URL string, options ...Option) (io.ReadCloser, error)\n\n\nUpload(ctx context.Context, URL string, mode os.FileMode, reader io.Reader, options ...Option) error\n\nCreate(ctx context.Context, URL string, mode os.FileMode, isDir bool, options ...Option) error\n\nDelete(ctx context.Context, URL string, options ...Option) error\n``` \n\n* **Batch uploader:**\n\n```go\ntype Upload func(ctx context.Context, parent string, info os.FileInfo, reader io.Reader) error\n \nUploader(ctx context.Context, URL string, options ...Option) (Upload, io.Closer, error)\n```\n\n* **Utilities:**\n\n```go\n\nCopy(ctx context.Context, sourceURL, destURL string, options ...Option) error\n\nMove(ctx context.Context, sourceURL, destURL string, options ...Option) error\n\nNewWriter(ctx context.Context, URL string, mode os.FileMode, options ...storage.Option) (io.WriteCloser, error)\n\nDownloadWithURL(ctx context.Context, URL string, options ...Option) ([]byte, error)\n\nDownload(ctx context.Context, object Object, options ...Option) ([]byte, error)\n\n```\n\n\nURL scheme is used to identify storage system, or alternatively relative/absolute path can be used for local file storage.\nBy default, all operations using the same baseURL share the same corresponding storage manager instance.\nFor example, instead supplying SCP auth details for all operations, auth option can be used only once.\n\n```go\n\nfunc main() {\n\n    ctx := context.Background()\n    {\n        //auth with first call \n        fs := afs.New()\n        defer fs.Close()\n        keyAuth, err := scp.LocalhostKeyAuth(\"\")\n        if err != nil {\n           log.Fatal(err)\n        }\n        reader1, err := fs.OpenURL(ctx, \"scp://host1:22/myfolder/asset.txt\", keyAuth)\n        if err != nil {\n               log.Fatal(err)\n        }\n        ...\n        reader2, err := fs.OpenURL(ctx, \"scp://host1:22/myfolder/asset.txt\", keyAuth)\n    }\n    \n    {\n        //auth per baseURL \n        fs := afs.New()\n        err = fs.Init(ctx, \"scp://host1:22/\", keyAuth)\n        if err != nil {\n            log.Fatal(err)\n        }\n        defer fs.Destroy(\"scp://host1:22/\")\n        reader, err := fs.OpenURL(ctx, \"scp://host1:22/myfolder/asset.txt\")\n     }\n}\n\n```\n\n## Usage\n\n##### Downloading location content\n\n```go\nfunc main() {\n\t\n    fs := afs.New()\n    ctx := context.Background()\n    objects, err := fs.List(ctx, \"/tmp/folder\")\n    if err != nil {\n        log.Fatal(err)\n    }\n    for _, object := range objects {\n        fmt.Printf(\"%v %v\\n\", object.Name(), object.URL())\n        if object.IsDir() {\n            continue\n        }\n        reader, err := fs.Open(ctx, object)\n        if err != nil {\n            log.Fatal(err)\n        }\n        data, err := ioutil.ReadAll(reader)\n        if err != nil {\n            log.Fatal(err)\n        }\n        fmt.Printf(\"%s\\n\", data)\n    }\n}\n```\n\n##### Uploading Content\n\n```go\nfunc main() {\n\t\n    fs := afs.New()\n    ctx := context.Background()\n    keyAuth, err := scp.LocalhostKeyAuth(\"\")\n    if err != nil {\n        log.Fatal(err)\n    }\n    err  = fs.Init(ctx, \"scp://127.0.0.1:22/\", keyAuth)\n    if err != nil {\n        log.Fatal(err)\n    }\t\n    err = fs.Upload(ctx, \"scp://127.0.0.1:22/folder/asset.txt\", 0644, strings.NewReader(\"test me\"))\n    if err != nil {\n        log.Fatal(err)\n    }\n    ok, err := fs.Exists(ctx, \"scp://127.0.0.1:22/folder/asset.txt\")\n    if err != nil {\n        log.Fatal(err)\n    }\n    fmt.Printf(\"has file: %v\\n\", ok)\n    _ = fs.Delete(ctx, \"scp://127.0.0.1:22/folder/asset.txt\")\n}\n```\n##### Uploading Content With Writer\n\n```go\nfunc main() {\n\t\n    fs := afs.New()\n    ctx := context.Background()\n    keyAuth, err := scp.LocalhostKeyAuth(\"\")\n    if err != nil {\n        log.Fatal(err)\n    }\n    err  = fs.Init(ctx, \"scp://127.0.0.1:22/\", keyAuth)\n    if err != nil {\n        log.Fatal(err)\n    }\t\n    writer = fs.NewWriter(ctx, \"scp://127.0.0.1:22/folder/asset.txt\", 0644)\n    _, err := writer.Write([]byte(\"test me\")))\n    if err != nil {\n        log.Fatal(err)\n    }\n    err = writer.Close()\n    if err != nil {\n        log.Fatal(err)\n    }\n    ok, err := fs.Exists(ctx, \"scp://127.0.0.1:22/folder/asset.txt\")\n    if err != nil {\n        log.Fatal(err)\n    }\n    fmt.Printf(\"has file: %v\\n\", ok)\n    _ = fs.Delete(ctx, \"scp://127.0.0.1:22/folder/asset.txt\")\n}\n```\n\n\n\n##### Data Copy\n\n```go\nfunc main() {\n\n    fs := afs.New()\n    ctx := context.Background()\n    keyAuth, err := scp.LocalhostKeyAuth(\"\")\n    if err != nil {\n        log.Fatal(err)\n    }\n    err = fs.Copy(ctx, \"s3://mybucket/myfolder\", \"scp://127.0.0.1/tmp\", option.NewSource(), option.NewDest(keyAuth))\n    if err != nil {\n        log.Fatal(err)\n    }\n}\n```\n\n##### Archiving content\n\n```go\n\nfunc main() {\n\t\n    secretPath := path.Join(os.Getenv(\"HOME\"), \".secret\", \"gcp-e2e.json\")\n    auth, err := gs.NewJwtConfig(option.NewLocation(secretPath))\n    if err != nil {\n        return\n    }\n    sourceURL := \"mylocalPath/\"\n    destURL := \"gs:mybucket/test.zip/zip://localhost/dir1\"\n    fs := afs.New()\n    ctx := context.Background()\n    err = fs.Copy(ctx, sourceURL, destURL, option.NewDest(auth))\n    if err != nil {\n        log.Fatal(err)\n    }\n\n}\t\n```\n\n\n##### Archive Walker\n\nWalker can be created for tar or zip archive.\n\n```go\nfunc main() {\n\t\n    ctx := context.Background()\n\tfs := afs.New()\n\twalker := tar.NewWalker(s3afs.New())\n\terr := fs.Copy(ctx, \"/tmp/test.tar\", \"s3:///dest/folder/test\", walker)\n\tif err != nil {\n\t\tlog.Fatal(err)\n\t}\n```\n\n\n##### Archive Uploader\n\nUploader can be created for tar or zip archive.\n\n```go\nfunc main() {\n\t\n    ctx := context.Background()\n\tfs := afs.New()\n\tuploader := zip.NewBatchUploader(gsafs.New())\n\terr := fs.Copy(ctx, \"gs:///tmp/test/data\", \"/tmp/data.zip\", uploader)\n\tif err != nil {\n\t\tlog.Fatal(err)\n\t}\n}\n```\n\n\n##### Data Move\n\n```go\nfunc main() {\n\t\n    fs := afs.New()\n\tctx := context.Background()\n\tkeyAuth, err := scp.LocalhostKeyAuth(\"\")\n\tif err != nil {\n\t\tlog.Fatal(err)\n\t}\n\terr = fs.Move(ctx, \"/tmp/transient/app\", \"scp://127.0.0.1/tmp\", option.NewSource(), option.NewDest(keyAuth))\n\tif err != nil {\n\t\tlog.Fatal(err)\n\t}\n}\t\n```\n\n##### Batch Upload\n\n```go\nfunc main() {\n\t\n    fs := afs.New()\n\tctx := context.Background()\n\tupload, closer, err := fs.Uploader(ctx, \"/tmp/clone\")\n\tif err != nil {\n\t\tlog.Fatal(err)\n\t}\n\tdefer closer.Close()\n\tassets := []*asset.Resource{\n\t\tasset.NewFile(\"asset1.txt\", []byte(\"test 1\"), 0644),\n\t\tasset.NewFile(\"asset2.txt\", []byte(\"test 2\"), 0644),\n\t\tasset.NewDir(\"folder1\", file.DefaultDirOsMode),\n\t\tasset.NewFile(\"folder1/asset1.txt\", []byte(\"test 3\"), 0644),\n\t\tasset.NewFile(\"folder1/asset2.txt\", []byte(\"test 4\"), 0644),\n\t}\n\tfor _, asset := range assets {\n\t\trelative := \"\"\n\t\tvar reader io.Reader\n\t\tif strings.Contains(asset.Name, \"/\") {\n\t\t\trelative, _ = path.Split(asset.Name)\n\t\t}\n\t\tif ! asset.Dir {\n\t\t\treader = bytes.NewReader(asset.Data)\n\t\t}\n\t\terr = upload(ctx, relative, asset.Info(), reader)\n\t\tif err != nil {\n\t\t\tlog.Fatal(err)\n\t\t}\n\t}\n}\n```\n## Matchers\n\nTo filter source content you can use [Matcher](option/matcher.go) option. \nThe following have been implemented.\n\n\n**[Basic Matcher](matcher/basic.go)**\n\n```go\nfunc main() {\n\t\n    matcher, err := NewBasic(\"/data\", \".avro\", nil)\n    fs := afs.New()\n    ctx := context.Background()\n    err := fs.Copy(ctx, \"/tmp/data\", \"s3://mybucket/data/\", matcher.Match)\n    if err != nil {\n        log.Fatal(err)\n    }\n}\n```\n\nExclusion\n\n```go\nfunc main() {\n\t\n    matcher := matcher.Basic{Exclusion:\".+/data/perf/\\\\d+/.+\"}\n    fs := afs.New()\n    ctx := context.Background()\n    err := fs.Copy(ctx, \"/tmp/data\", \"s3://mybucket/data/\", matcher.Match)\n    if err != nil {\n        log.Fatal(err)\n    }\n}\n```\n\n**[Filepath matcher](matcher/filepath.go)**\n\nOS style filepath match, with the following terms:\n- '*'         matches any sequence of non-Separator characters\n- '?'         matches any single non-Separator character\n- '[' [ '^' ] { character-range } ']'\n\n```go\n\nfunc main() {\n\t\n    matcher := matcher.Filepath(\"*.avro\")\n    fs := afs.New()\n    ctx := context.Background()\n    err := fs.Copy(ctx, \"/tmp/data\", \"gs://mybucket/data/\", matcher)\n    if err != nil {\n        log.Fatal(err)\n    }\n}\t\n\t\t\n\n```\n\n**[Ignore Matcher](matcher/ignore.go)**\n\nIgnore matcher represents matcher that matches file that are not in the ignore rules.\nThe syntax of ignore borrows heavily from that of .gitignore; see https://git-scm.com/docs/gitignore or man gitignore for a full reference.\n\n\n```go\nfunc mian(){\n\t\n\tignoreMatcher, err := matcher.NewIgnore([]string{\"*.txt\", \".ignore\"})\n  \t//or matcher.NewIgnore(option.NewLocation(\".cloudignore\"))\n\tif err != nil {\n\t\tlog.Fatal(err)\n\t}\n\tfs := afs.New()\n\tctx := context.Background()\n\tobjects, err := fs.List(ctx, \"/tmp/folder\", ignoreMatcher.Match)\n\tif err != nil {\n\t\tlog.Fatal(err)\n\t}\n\tfor _, object := range objects {\n\t\tfmt.Printf(\"%v %v\\n\", object.Name(), object.URL())\n\t\tif object.IsDir() {\n\t\t\tcontinue\n\t\t}\n\t}\n}\t\n```\n\n\n\n**[Modification Time Matcher](matcher/modification.go)**\n\nModification Time Matcher represents matcher that matches file that were modified either before or after specified time.\n\n```go\nfunc mian(){\n\t\n\tbefore, err := toolbox.TimeAt(\"2 days ago in UTC\")\n    if err != nil {\n\t\tlog.Fatal(err)\n\t}\t\n\tmodTimeMatcher, err := matcher.NewModification(before, nil)\n\tif err != nil {\n\t\tlog.Fatal(err)\n\t}\n\tfs := afs.New()\n\tctx := context.Background()\n\tobjects, err := fs.List(ctx, \"/tmp/folder\", modTimeMatcher.Match)\n\tif err != nil {\n\t\tlog.Fatal(err)\n\t}\n\tfor _, object := range objects {\n\t\tfmt.Printf(\"%v %v\\n\", object.Name(), object.URL())\n\t\tif object.IsDir() {\n\t\t\tcontinue\n\t\t}\n\t}\n}\t\n```\n\n\n## Content modifiers\n\nTo modify resource content on the fly you can use [Modifier](option/modifier.go) option.\n\n```go\nfunc main() {\n\tfs := afs.New()\n\tctx := context.Background()\n\tsourceURL := \"file:/tmp/app.war/zip://localhost/WEB-INF/classes/config.properties\"\n\tdestURL := \"file:/tmp/app.war/zip://localhost/\"\n\terr := fs.Copy(ctx, sourceURL, destURL, modifier.Replace(map[string]string{\n\t\t\"${changeMe}\": os.Getenv(\"USER\"),\n\t}))\n\tif err != nil {\n\t\tlog.Fatal(err)\n\t}\n}\n```\n\n```go\npackage main\n\nimport (\n\t\"context\"\n\t\"log\"\n\t\"github.com/viant/afs\"\n\t\"io\"\n\t\"fmt\"\n\t\"io/ioutil\"\n\t\"os\"\n\t\"strings\"\n)\n\nfunc modifyContent(info os.FileInfo, reader io.ReadCloser) (closer io.ReadCloser, e error) {\n   if strings.HasSuffix(info.Name() ,\".info\") {\n       data, err := ioutil.ReadAll(reader)\n       if err != nil {\n           return nil, err\n       }\n       _ = reader.Close()\n       expanded := strings.Replace(string(data), \"$os.User\", os.Getenv(\"USER\"), 1)\n       reader = ioutil.NopCloser(strings.NewReader(expanded))\n   }\n   return reader, nil\n}                           \n\nfunc main() {\n\n    fs := afs.New()\n    reader ,err := fs.OpenURL(context.Background(), \"s3://mybucket/meta.info\", modifyContent)\n    if err != nil {\n        log.Fatal(err)\t\n    }\n    \n    defer reader.Close()\n    content, err := ioutil.ReadAll(reader)\n    if err != nil {\n        log.Fatal(err)\t\n    }\n    fmt.Printf(\"content: %s\\n\", content)\n\t\n}\n```\n\n\n### Streaming data\n\nStreaming data allows data reading and uploading in chunks with small memory footprint.\n\n\n```go\n\n    jwtConfig, err := gs.NewJwtConfig()\n\tif err != nil {\n\t\tlog.Fatal(err)\n\t}\n\n\tctx := context.Background()\n\tfs := afs.New()\n\tsourceURL := \"gs://myBucket/path/myasset.gz\"\n\treader, err := fs.OpenURL(ctx, sourceURL, jwtConfig, option.NewStream(64*1024*1024, 0))\n\tif err != nil {\n\t\tlog.Fatal(err)\n\t}\n    \n\t_ = os.Setenv(\"AWS_SDK_LOAD_CONFIG\", \"true\")\n\tdestURL := \"s3://myBucket/path/myasset.gz\"\n\terr = fs.Upload(ctx, destURL, 0644, reader, \u0026option.Checksum{Skip:true})\n\tif err != nil {\n\t\tlog.Fatal(err)\n\t}\n\n    // or\n    writer = fs.NewWriter(ctx, destURL, 0644, \u0026option.Checksum{Skip:true})\n    _, err = io.Copy(writer, reader)\n    if err != nil {\n        log.Fatal(err)\n    }\n    err = writer.Close()\n\tif err != nil {\n\t\tlog.Fatal(err)\n\t}\n```\n\n\n\n## Options\n\n\n* **[Page](option/page.go)**\n\nTo control number and position of listed resources you can yse page option.\n\n* **[Timeout](option/timeout.go)**\n\nProvider specific timeout.\n\n* **[BasicAuth](option/cred.go)** \n\nProvides user/password auth.\n\n* **Source \u0026 Dest Options**\n\nGroups options by source or destination options. This options work with Copy or Move operations.\n\n```go\n\nfunc main() {\n\t\n    fs := afs.New()\n    secretPath :=  path.Join(os.Getenv(\"HOME\"), \".secret\",\"gcp.json\")\n    jwtConfig, err := gs.NewJwtConfig(option.NewLocation(secretPath))\n    if err != nil {\n    \tlog.Fatal(err)\n    }\n    sourceOptions := option.NewSource(jwtConfig)\n    authConfig, err := s3.NewAuthConfig(option.NewLocation(\"aws.json\"))\n    if err != nil {\n        log.Fatal(err)\n    }\n    destOptions := option.NewDest(authConfig)\n\terr = fs.Copy(ctx, \"gs://mybucket/data\", \"s3://mybucket/data\",  sourceOptions, destOptions)\n}\n\n```\n\n\n* **[option.Checksum](option/checksum.go)** skip computing checksum if Skip is  set, this option allows streaming upload in chunks\n* **[option.Stream](option/stream.go)**: download reader reads data with specified stream PartSize \n\n\n\n\n\nCheck out [storage manager](#storage-managers) for additional options. \n\n## Storage Implementations\n\n- [File](file/README.md)\n- [In Memory](mem/README.md)\n- [SSH - SCP](scp/README.md)\n- [HTTP](http/README.md)\n- [Tar](tar/README.md)\n- [Zip](zip/README.md)\n- [GCP - GS](https://github.com/viant/afsc/tree/master/gs)\n- [AWS - S3](https://github.com/viant/afsc/tree/master/s3)\n\n## Testing fs\n\nTo unit test all storage operation all in memory you can use faker fs.\n\nIn addition you can use error options to test exception handling.\n\n- **DownloadError**\n```go\n\nfunc mian() {\n\tfs := afs.NewFaker()\n\tctx := context.Background()\n\terr := fs.Upload(ctx, \"gs://myBucket/folder/asset.txt\", 0, strings.NewReader(\"some data\"), option.NewUploadError(io.EOF))\n\tif err != nil {\n\t\tlog.Fatalf(\"expect upload error: %v\", err)\n\t}\n}\n```\n\n- **ReaderError**\n```go\n\nfunc mian() {\n    fs := afs.NewFaker()\n\tctx := context.Background()\n\terr := fs.Upload(ctx, \"gs://myBucket/folder/asset.txt\", 0, strings.NewReader(\"some data\"), option.NewDownloadError(io.EOF))\n\tif err != nil {\n\t\tlog.Fatal(err)\n\t}\n\t_, err = fs.OpenURL(ctx, \"gs://myBucket/folder/asset.txt\")\n\tif err != nil {\n\t\tlog.Fatalf(\"expect download error: %v\", err)\n\t}\n}\n```\n\n- **UploadError**\n```go\n\nfunc mian() {\n    fs := afs.NewFaker()\n    ctx := context.Background()\n    err := fs.Upload(ctx, \"gs://myBucket/folder/asset.txt\", 0, strings.NewReader(\"some data\"), option.NewUploadError(io.EOF))\n    if err != nil {\n        log.Fatalf(\"expect upload error: %v\", err)\n    }\n}\n```\n\n\n##### Code generation for static or in memory go file\n\n\nGenerate with mem storage\n\n```\npackage main\n\nimport (\n    \"log\"\n    \"github.com/viant/afs/parrot\n)\n\nfunc mian() {\n  ctx := context.Background()\n  err := parrot.GenerateWithMem(ctx, \"pathToBinaryAsset\", \"gen.go\", false)\n  if err != nil {\n    log.Fatal(err)\n  }\n}\n\n```\n\nGenerate static data files\n\n```\npackage main\n\nimport (\n    \"log\"\n    \"github.com/viant/afs/parrot\n)\n\nfunc mian() {\n  ctx := context.Background()\n  err := parrot.Generate(ctx, \"pathToBinaryAsset\", \"data/\", false)\n  if err != nil {\n    log.Fatal(err)\n  }\n}\n\n```\n\n\n## Test setup utilities\n\nPackage [asset](asset) defines basic utilities to quickly manage asset related unit tests.\n\n```go\n\nfunc Test_XXX(t *testing.T) {\n\n    var useCases = []struct {\n\t\tdescription string\n\t\tlocation    string\n\t\toptions     []storage.Option\n\t\tassets      []*asset.Resource\n\t}{\n\n\t}\n\n\tctx := context.Background()\n\tfor _, useCase := range useCases {\n\t\tfs := afs.New()\n\t\tmgr, err := afs.Manager(useCase.location, useCase.options...)\n\t\tif err != nil {\n\t\t\tlog.Fatal(err)\n\t\t}\n\t\terr = asset.Create(mgr, useCase.location, useCase.assets)\n\t\tif err != nil {\n\t\t\tlog.Fatal(err)\n\t\t}\n\t\t\n\t\t//... actual app logic\n\n\t\tactuals, err := asset.Load(mgr, useCase.location)\n\t\tif err != nil {\n\t\t\tlog.Fatal(err)\n\t\t}\n        for _, expect := range useCase.assets {\n            actual, ok := actuals[expect.Name]\n            if !assert.True(t, ok, useCase.description+\": \"+expect.Name+fmt.Sprintf(\" - actuals: %v\", actuals)) {\n                continue\n            }\n            assert.EqualValues(t, expect.Name, actual.Name, useCase.description+\" \"+expect.Name)\n            assert.EqualValues(t, expect.Mode, actual.Mode, useCase.description+\" \"+expect.Name)\n            assert.EqualValues(t, expect.Dir, actual.Dir, useCase.description+\" \"+expect.Name)\n            assert.EqualValues(t, expect.Data, actual.Data, useCase.description+\" \"+expect.Name)\n        }\n\n\t\t_ = asset.Cleanup(mgr, useCase.location)\n\n\t}\n}\n\n```\n\n\n\n\n## GoCover\n\n[![GoCover](https://gocover.io/github.com/viant/afs)](https://gocover.io/github.com/viant/afs)\n\n\n\u003ca name=\"License\"\u003e\u003c/a\u003e\n## License\n\nThe source code is made available under the terms of the Apache License, Version 2, as stated in the file `LICENSE`.\n\nIndividual files may be made available under their own specific license,\nall compatible with Apache License, Version 2. Please see individual files for details.\n\n\u003ca name=\"Credits-and-Acknowledgements\"\u003e\u003c/a\u003e\n\n## Credits and Acknowledgements\n\n**Library Author:** Adrian Witas\n\n","funding_links":[],"categories":["HarmonyOS","Files","File Handling","Relational Databases","文件处理","文件处理`处理文件和文件系统操作的库`"],"sub_categories":["Windows Manager","Advanced Console UIs","Search and Analytic Databases","SQL 查询语句构建库","检索及分析资料库"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fviant%2Fafs","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fviant%2Fafs","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fviant%2Fafs/lists"}