{"id":17006489,"url":"https://github.com/frostmarked/bonparent","last_synced_at":"2025-04-12T06:32:36.747Z","repository":{"id":119857120,"uuid":"280132521","full_name":"frostmarked/bonParent","owner":"frostmarked","description":"Parent project for limousin.se","archived":false,"fork":false,"pushed_at":"2021-08-01T07:45:30.000Z","size":554,"stargazers_count":14,"open_issues_count":0,"forks_count":4,"subscribers_count":0,"default_branch":"master","last_synced_at":"2025-03-26T02:04:31.211Z","etag":null,"topics":["angular","cows","jhipster","kubernetes","microservices","spring"],"latest_commit_sha":null,"homepage":"https://limousin.se","language":"Shell","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":null,"status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/frostmarked.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":null,"code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null}},"created_at":"2020-07-16T11:09:11.000Z","updated_at":"2024-08-01T06:07:12.000Z","dependencies_parsed_at":"2023-06-03T10:00:31.992Z","dependency_job_id":null,"html_url":"https://github.com/frostmarked/bonParent","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/frostmarked%2FbonParent","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/frostmarked%2FbonParent/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/frostmarked%2FbonParent/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/frostmarked%2FbonParent/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/frostmarked","download_url":"https://codeload.github.com/frostmarked/bonParent/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":248529900,"owners_count":21119580,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["angular","cows","jhipster","kubernetes","microservices","spring"],"created_at":"2024-10-14T05:05:58.763Z","updated_at":"2025-04-12T06:32:36.048Z","avatar_url":"https://github.com/frostmarked.png","language":"Shell","readme":"# bonParent\n\nIndependent parent project for holding common configuration and documentation for its children:\n* https://github.com/frostmarked/bonGateway\n* https://github.com/frostmarked/bonReplicaService\n* https://github.com/frostmarked/bonLivestockService\n* https://github.com/frostmarked/bonContentService\n\n## Overview\n\nIm going to create an over-engineered website for my cows!\u003cbr/\u003e\nWhy? Learning by doing!\n\nIts located on https://limousin.se \u003cbr\u003e\nJHipster Registry on https://registry.limousin.se and the \u003cbr\u003eJHipster Console on https://console.limousin.se\u003cbr\u003e\nMy production k8s cluster (or is it a playground cluster??) 
My production k8s cluster (or is it a playground cluster??) is running on https://www.scaleway.com.
CI/CD is handled by GitHub Actions, static code analysis comes from https://sonarcloud.io/ and a performance boost is provided by https://www.cloudflare.com/.

[JHipster](https://www.jhipster.tech/) and [JHipster JDL](https://www.jhipster.tech/jdl) will be the backbone powering the sketch below of the planned architecture:
https://github.com/frostmarked/bonParent/blob/master/com-bonlimousin-jhipster-jdl.jdl

Hopefully I will make time to keep the documentation of my findings during development with JHipster up to date, both the good parts and the OFI (opportunity for improvement). I am also going to keep the project and its code as open as possible. Currently a few Kubernetes files (secrets, config maps) are not in any repository.

The plan is to stay as close as possible to the defaults and best practices, according to the JHipster docs. But every now and then I'll probably try something different. See [Slightly different trail](#differenttrail).

![Sketch of planned architecture](docs/balsamiq/overview.png)

## Projects
The purpose of each project and maybe a few notes about tech, beyond what can be read at [JHipster](https://www.jhipster.tech/).

Sonar Cloud is set up for all projects:
https://sonarcloud.io/organizations/frostmarked/projects

Of course the plan is to pass the Sonar quality gate. We will see how much effort I want to put into that...

### Bon Gateway
UI that displays my cattle to the public.

Signed-in mode for specific users that get access to enhanced data and functions.

BFF (backend-for-frontend) setup so that the APIs can provide aggregated data from the downstream services.

https://github.com/frostmarked/bonGateway

![](https://github.com/frostmarked/bonGateway/workflows/Bon%20Gateway%20CI/badge.svg)
[![Quality Gate Status](https://sonarcloud.io/api/project_badges/measure?project=frostmarked_bonGateway&metric=alert_status)](https://sonarcloud.io/dashboard?id=frostmarked_bonGateway)

### Bon Replica Service
Provides a clone of data from the central Swedish registry for domestic animals.

https://github.com/frostmarked/bonReplicaService

![](https://github.com/frostmarked/bonReplicaService/workflows/Bon%20Replica%20Service%20CI/badge.svg)
[![Quality Gate Status](https://sonarcloud.io/api/project_badges/measure?project=frostmarked_bonReplicaService&metric=alert_status)](https://sonarcloud.io/dashboard?id=frostmarked_bonReplicaService)
### Bon Livestock Service
Additional data about the cows that is of more or less interest, e.g. images.

https://github.com/frostmarked/bonLivestockService

![](https://github.com/frostmarked/bonLivestockService/workflows/Bon%20Livestock%20Service%20CI/badge.svg)
[![Quality Gate Status](https://sonarcloud.io/api/project_badges/measure?project=frostmarked_bonLivestockService&metric=alert_status)](https://sonarcloud.io/dashboard?id=frostmarked_bonLivestockService)

### Bon Content Service
Very small and simple CMS that can hold some kind of newsworthy text.

https://github.com/frostmarked/bonContentService

![](https://github.com/frostmarked/bonContentService/workflows/Bon%20Content%20Service%20CI/badge.svg)
[![Quality Gate Status](https://sonarcloud.io/api/project_badges/measure?project=frostmarked_bonContentService&metric=alert_status)](https://sonarcloud.io/dashboard?id=frostmarked_bonContentService)

## Slightly different trail
<a name="differenttrail"></a>

### Parent project
This project, bonParent, is in my opinion a better way of keeping track of shared files for and from JHipster that don't belong in any specific project. It also gives me a place to gather documentation that is not project specific. And as a bonus I can build all projects using standard Maven mechanics. Why would I do that? Well, in the future I plan to create end-to-end tests in this project.

### Kubernetes
One database to rule them all. By default I get 4 PostgreSQL instances and then my computer dies...
Locally and in production I'll be using one database with 4 schemas.
The prod environment is hosted by https://www.scaleway.com/, which provides managed Kubernetes and managed PostgreSQL.
https://github.com/frostmarked/bonParent/tree/master/k8s

PullPolicy is set to Always. Not sure if that is necessary or good.
Anyway, the plan was to always run the latest version and simply redeploy with
```
kubectl rollout restart deployment/bongateway -n bonlimousin
```
if a newer version exists.

*TODO*
Probably time to revert this pull policy change. All release pipelines are set up and stable.

### Docker-compose for bonGateway
WIP: Made my own docker-compose file for running all docker images except the gateway in some hybrid mode so they can communicate. Still needs some testing...
Why? It's a lazy way of starting all the other apps so development can begin and end faster.
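A minimal sketch of how I imagine using it. The compose file name and location are assumptions (JHipster generates its compose files under src/main/docker):

```sh
# Start everything except the gateway (hypothetical file name)
docker-compose -f src/main/docker/hybrid.yml up -d

# Tail the logs while the services register themselves
docker-compose -f src/main/docker/hybrid.yml logs -f

# Tear it all down again, including volumes
docker-compose -f src/main/docker/hybrid.yml down -v
```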
### GraphQL
Currently the plan is to make use of GraphQL. The schema is a pure translation of the website's public OAS, which is built using [Doing API-First development](https://www.jhipster.tech/doing-api-first-development/).

Using https://github.com/IBM/openapi-to-graphql/ for the translation of the OAS.

Using https://www.apollographql.com/docs/angular/ as the client-side lib, plus https://graphql-code-generator.com/ for generating TypeScript from the GraphQL schema.

Using https://www.graphql-java-kickstart.com/spring-boot/ as the server-side lib, plus https://github.com/kobylynskyi/graphql-java-codegen/tree/master/plugins/maven for generating Java from the schema.

**Note**
GraphQL with Kickstart breaks the regular test environment.
For now, handle it according to this GitHub issue:
https://github.com/graphql-java-kickstart/graphql-spring-boot/issues/230
```
spring:
  autoconfigure:
    exclude:
      - graphql.kickstart.spring.web.boot.GraphQLWebAutoConfiguration
      - graphql.kickstart.spring.web.boot.GraphQLWebsocketAutoConfiguration
```

and use the @GraphQLTest annotation on the test class.

More information can be found in https://github.com/frostmarked/bonGateway/blob/master/README.md#doing-graphql-with-oas-spec

### Only admins can register new users
Disabled public registration of new users. The website will eventually have a signed-in mode, but I only want certain people to have access, without an invite email.
By toggling a boolean Spring property it's back to default.

### Maven CI Friendly Versions
Implemented https://maven.apache.org/maven-ci-friendly.html
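In practice that means the version is assembled from the `revision` and `changelist` properties and can be overridden on the command line. A quick sketch (the version values are just examples):

```sh
# Default build: version resolves to something like 0.0.1-SNAPSHOT
./mvnw clean verify

# Release-style build: pin the revision and blank out the -SNAPSHOT changelist
./mvnw clean verify -Drevision=1.2.3 -Dchangelist=
```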
### GitHub Actions
Modified the result from
```
jhipster ci-cd
```

The pipeline takes different routes depending on whether it's triggered by a release or not.
Apart from the difference in versioning of the artifact and image, the release flow rolls out the new version to the given Kubernetes cluster.

### Liquibase with spring profile prod
Puh... the app did not start... why why why???

You might think it did not start due to something with Eureka and the discovery client.
After all, the last bit of information you get from the log is:
```
2020-07-25 09:16:18.793  INFO 1 --- [nfoReplicator-0] com.netflix.discovery.DiscoveryClient    : Saw local status change event StatusChangeEvent [timestamp=1595668578792, current=UP, previous=STARTING]
2020-07-25 09:16:18.806  INFO 1 --- [nfoReplicator-0] com.netflix.discovery.DiscoveryClient    : DiscoveryClient_BONGATEWAY/bongateway:f33e529ccc6fdd5eebba10e2679c4082: registering service...
2020-07-25 09:16:19.099  INFO 1 --- [nfoReplicator-0] com.netflix.discovery.DiscoveryClient    : DiscoveryClient_BONGATEWAY/bongateway:f33e529ccc6fdd5eebba10e2679c4082 - registration status: 204
2020-07-25 09:16:19.205  INFO 1 --- [           main] c.b.gateway.config.WebConfigurer         : Web application configuration, using profiles: prod
2020-07-25 09:16:19.206  INFO 1 --- [           main] c.b.gateway.config.WebConfigurer         : Web application fully configured
2020-07-25 09:16:27.795  INFO 1 --- [trap-executor-0] c.n.d.s.r.aws.ConfigClusterResolver      : Resolving eureka endpoints via configuration
2020-07-25 09:16:42.797  INFO 1 --- [trap-executor-0] c.n.d.s.r.aws.ConfigClusterResolver      : Resolving eureka endpoints via configuration
2020-07-25 09:16:43.523  WARN 1 --- [scoveryClient-0] c.netflix.discovery.TimedSupervisorTask  : task supervisor timed out

java.util.concurrent.TimeoutException: null
	at java.base/java.util.concurrent.FutureTask.get(Unknown Source)
	at com.netflix.discovery.TimedSupervisorTask.run(TimedSupervisorTask.java:68)
	at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
	at java.base/java.util.concurrent.FutureTask.run(Unknown Source)
	at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(Unknown Source)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
	at java.base/java.lang.Thread.run(Unknown Source)
```

Don't be fooled like me...
It's actually Liquibase that found a lock.
But! The default setup from JHipster makes that very important piece of info disappear in silence.
Or should I blame Liquibase? Is it not obvious that it should be a warning???
Anyway, setting the logging in the prod profile to DEBUG solved the mystery.

```
2020-07-25 09:25:29.166  INFO 1 --- [           main] liquibase.executor.jvm.JdbcExecutor      : SELECT COUNT(*) FROM public.databasechangeloglock
2020-07-25 09:25:29.172  INFO 1 --- [           main] liquibase.executor.jvm.JdbcExecutor      : SELECT COUNT(*) FROM public.databasechangeloglock
2020-07-25 09:25:29.180  INFO 1 --- [           main] liquibase.executor.jvm.JdbcExecutor      : SELECT LOCKED FROM public.databasechangeloglock WHERE ID=1
2020-07-25 09:25:29.187  INFO 1 --- [           main] l.lockservice.StandardLockService        : Waiting for changelog lock....
```

Probably the container/app got terminated in a bad state. Unlock Liquibase with:
```
UPDATE DATABASECHANGELOGLOCK SET LOCKED=false, LOCKGRANTED=null, LOCKEDBY=null where ID=1;
```
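Against the managed PostgreSQL that usually means running the statement via psql. A sketch; host, user and database name are placeholders for your own setup:

```sh
# Release a stale Liquibase changelog lock directly in the database
psql -h <db-host> -U <db-user> -d bongateway \
  -c "UPDATE databasechangeloglock SET locked=false, lockgranted=NULL, lockedby=NULL WHERE id=1;"
```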
From now on I will explicitly set the logging level in the prod profile to:

```
logging:
  level:
    ROOT: INFO
    io.github.jhipster: INFO
    com.bonlimousin.gateway: INFO
    liquibase: INFO
```

Or should I use logback-spring.xml?

### Kafka with spring-kafka
I guess there is a good explanation for why JHipster does not use spring-kafka.
Since I have not figured that out, I will use spring-kafka.
I can not see any point in writing code that has already been written.
The change pretty much boils down to:

Swap the kafka-clients dependency for
```
<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
</dependency>
```

And use the existing KafkaProperties for configuration of templates and listeners
```
@Configuration
public class KafkaConfiguration {

    @Autowired
    private KafkaProperties kafkaProperties;

    @Bean
    public ProducerFactory<String, Object> producerFactory() {
        return new DefaultKafkaProducerFactory<>(kafkaProperties.getProducerProps());
    }

    @Bean
    public KafkaTemplate<String, Object> kafkaTemplate() {
        return new KafkaTemplate<>(producerFactory());
    }

    @Bean
    public ConsumerFactory<String, Object> consumerFactory() {
        return new DefaultKafkaConsumerFactory<>(kafkaProperties.getConsumerProps());
    }

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, Object> kafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<String, Object> factory = new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory());
        return factory;
    }
}
```
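A quick way to sanity-check the swap is to ask Maven which Kafka artifacts actually ended up on the classpath:

```sh
# Verify spring-kafka is in, and that kafka-clients now only arrives transitively
./mvnw dependency:tree -Dincludes=org.springframework.kafka
./mvnw dependency:tree -Dincludes=org.apache.kafka:kafka-clients
```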
### Testing with Kafka
A few reasonable tools and strategies exist. And yes, Testcontainers Kafka seems superior.
```
<dependency>
    <groupId>org.testcontainers</groupId>
    <artifactId>kafka</artifactId>
    <scope>test</scope>
</dependency>
```

I thought it was obvious at first, but had to iterate several times before deciding on the solution below.

Created a test config that hijacks KafkaProperties and overrides the bootstrap-servers property, if the Kafka container is running.
```
@TestConfiguration
public class KafkaTestConfiguration {

    public static boolean started = false;
    public static KafkaContainer kafkaContainer;

    // Point the app's bootstrap-servers at the Testcontainers broker, if one is running
    @Autowired
    public void kafkaProperties(KafkaProperties kafkaProperties) {
        if (started) {
            kafkaProperties.setBootStrapServers(kafkaContainer.getBootstrapServers());
        }
    }

    public static void startKafka() {
        kafkaContainer = new KafkaContainer("5.5.0").withNetwork(null);
        kafkaContainer.start();
        started = true;
    }

    public static void stopKafka() {
        if (started) {
            kafkaContainer.stop();
        }
    }
}
```

The test class then uses it as follows.
Note: Remember to call consumer.commitSync() if you plan to have several tests, so that the previous records do not linger around.
```
@SpringBootTest(classes = { BonReplicaServiceApp.class, KafkaTestConfiguration.class })
class MyKafkaIT {

    @Autowired
    private ConsumerFactory<String, Object> consumerFactory;

    @BeforeAll
    public static void setup() {
        KafkaTestConfiguration.startKafka();
    }

    @Test
    @Transactional
    void testA() {
        // trigger broadcasting of topic
        ConsumerRecords<String, Object> records = consumeChanges();
        assertThat(records.count()).isEqualTo(1);
        ConsumerRecord<String, Object> record = records.iterator().next();
        assertEquals("CREATE", record.key());
        // and so on...
    }

    private ConsumerRecords<String, Object> consumeChanges() {
        Consumer<String, Object> consumer = consumerFactory.createConsumer();
        consumer.subscribe(Collections.singletonList("MY_TOPIC"));
        ConsumerRecords<String, Object> records = consumer.poll(Duration.ofSeconds(2));
        consumer.commitSync();
        consumer.unsubscribe();
        consumer.close();
        return records;
    }

    @AfterAll
    public static void tearDown() {
        KafkaTestConfiguration.stopKafka();
    }
}
```
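To iterate on a single integration test like this one, the failsafe plugin can be pointed at it directly (assuming the default JHipster failsafe setup):

```sh
# Run only this integration test class via maven-failsafe
./mvnw verify -Dit.test=MyKafkaIT
```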
### Entity change emitter
The apps use Kafka and the JHipster module Entity Audit with Javers,
https://www.jhipster.tech/modules/marketplace/#/details/generator-jhipster-entity-audit.

What can we do with that? Broadcast changes!

Javers has pointcuts on the Spring repositories, so I put a pointcut on Javers and piggyback on its functionality.
```
@Component
@Aspect
@Order(Ordered.LOWEST_PRECEDENCE)
public class EntityChangeJaversAspect {

    private final EntityChangeService entityChangeService;

    public EntityChangeJaversAspect(EntityChangeService entityChangeService) {
        this.entityChangeService = entityChangeService;
    }

    // Fires after every successful Javers commit, i.e. after every audited entity change
    @AfterReturning(pointcut = "execution(public * commit(..)) && this(org.javers.core.Javers)", returning = "commit")
    public void onJaversCommitExecuted(JoinPoint jp, Commit commit) {
        this.entityChangeService.broadcastEntityChange(commit);
    }
}
```

A simple service checks whether the commit contains any changes, and if so transforms the Javers commit and sends it with Kafka.
```
@Service
public class EntityChangeService {

    private final Logger log = LoggerFactory.getLogger(EntityChangeService.class);

    private final KafkaTemplate<String, EntityChangeVO> kafkaTemplate;

    public EntityChangeService(KafkaTemplate<String, EntityChangeVO> entityChangeKafkaTemplate) {
        this.kafkaTemplate = entityChangeKafkaTemplate;
    }

    public void broadcastEntityChange(Commit commit) {
        if (commit.getSnapshots().isEmpty()) {
            return;
        }
        CdoSnapshot snapshot = commit.getSnapshots().get(0);
        EntityChangeVO vo = CdoSnapshotToEntityChangeVOConverter.convert(snapshot, new EntityChangeVO());
        String topic = "ENTITY_CHANGE_" + getManagedTypeSimpleName(snapshot).toUpperCase();
        String key = vo.getAction();
        send(new ProducerRecord<>(topic, key, vo));
    }

    public void send(ProducerRecord<String, EntityChangeVO> record) {
        kafkaTemplate.send(record).addCallback(
                result -> log.debug(
                        "Sent entity-change-topic {} with key {} and changes to params {} with resulting offset {} ",
                        record.topic(), record.key(), record.value().getChangedEntityFields(), result.getRecordMetadata().offset()),
                ex -> log.error("Failed to send entity-change-topic {} with key {} and changes to params {} due to {} ",
                        record.topic(), record.key(), record.value().getChangedEntityFields(), ex.getMessage(), ex));
    }

    protected static String getManagedTypeSimpleName(CdoSnapshot snapshot) {
        String className = snapshot.getManagedType().getName();
        return className.substring(className.lastIndexOf('.') + 1);
    }

}
```

Note: the code is currently in bonReplicaService, with a listener in bonLivestockService.
Note2: the emit might be a "false positive". If the entity change belongs to a transaction that later rolls back, so does the Javers commit, and you end up with no trace in the database of the data that was sent out.
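To eyeball what actually lands on those topics, the stock Kafka console consumer works fine. The topic name here assumes a Cattle entity, following the ENTITY_CHANGE_&lt;ENTITY&gt; naming in the service above, and the script name assumes the Kafka distribution's bin directory:

```sh
# Watch entity-change events, printing the action key (CREATE, UPDATE, ...) next to each value
kafka-console-consumer.sh --bootstrap-server localhost:9092 \
  --topic ENTITY_CHANGE_CATTLE --from-beginning \
  --property print.key=true
```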
### Prototype - Ahead of time (or JIT) responsive images
I like to have all data in the database, which includes images. Lazy and simple.
I also like the HTML5 srcset for images: the browser can pick a suitable image depending on device and context.
So, instead of returning a high-resolution image encoded in base64 every time, I use Thumbnailator
```
<dependency>
    <groupId>net.coobird</groupId>
    <artifactId>thumbnailator</artifactId>
    <version>${net.coobird.thumbnailator.version}</version>
</dependency>
```

Not sure if it's a valid plan to use Bootstrap's container max-widths as the base for the image sizes...
For now it will have to do
```
public enum PictureSize {
    ORIGINAL(null), SMALL(540), MEDIUM(720), LARGE(960), XL(1140);
```

Currently it's very simple. If no image exists on disk, create it
```
Path path = Paths.get(imageBaseDir, imageName);
if (!Files.exists(path)) {
    try (ByteArrayInputStream bais = new ByteArrayInputStream(image)) {
        if (pictureSize.pixelWidth() != null) {
            Thumbnails.of(bais).width(pictureSize.pixelWidth()).toFile(path.toFile());
        } else {
            Thumbnails.of(bais).scale(1).toFile(path.toFile());
        }
    }
}
```

The public APIs (REST and GraphQL) will from now on return several URLs instead of a base64 string.
A directive can help with populating the img attributes, srcset and src.
```
import { Directive, ElementRef, Input, Renderer2, OnChanges } from '@angular/core';
import { PictureVo, Maybe } from '../../bonpublicgraphql/bonpublicgraphql';
import { pickPictureSourceUrl } from 'app/shared/bon/picturevo-util';

@Directive({
  selector: '[jhiCowPicture]',
})
export class CowPictureDirective implements OnChanges {
  @Input('jhiCowPicture')
  picture?: Maybe<PictureVo>;
  @Input()
  targetWidth?: string;

  constructor(private renderer: Renderer2, private el: ElementRef) {}

  ngOnChanges(): void {
    if (this.picture?.sources) {
      const tw = this.targetWidth ? parseInt(this.targetWidth, 10) || 992 : 992;
      const imgSrc = pickPictureSourceUrl(this.picture.sources, tw);
      this.renderer.setAttribute(this.el.nativeElement, 'src', imgSrc);

      const imgSrcSet = this.picture?.sources
        .filter(ps => ps && ps.url !== imgSrc)
        .map(ps => `${ps!.url} ${ps!.width}w`)
        .join(',');
      this.renderer.setAttribute(this.el.nativeElement, 'srcset', imgSrcSet);
    }
  }
}
```

Note: Perhaps the images should be stored on a persistent volume instead of in java.io.tmpdir.
Note2: Ingress/nginx will probably have a low default value for the request body size. Handle it with an annotation
```
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: bongateway
  namespace: bonlimousin
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: 8m
spec:
  rules:
```

Note3: The plan to let Cloudflare save the day failed... The thumbnails are pulled from the source too often, so the stress on the servers is unnecessarily high. A less crappy solution is to store the thumbnails on S3 instead, and return a pre-signed URL to the bucket instead of streaming bytes from the app and the cluster.
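The pre-signed URL idea can be prototyped from the CLI before touching any Java. Bucket and key below are hypothetical:

```sh
# Generate a pre-signed GET URL for a thumbnail, valid for one hour
aws s3 presign s3://bon-thumbnails/cattle/1234-small.jpg --expires-in 3600
```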
### Picsum Photos

Lorem-ipsum photos from Picsum are great for development. Beware that the JHipster Content Security Policy settings are tight. As they should be.
But just adding picsum.photos as an image source is not enough. Picsum also uses subdomains, so change the CSP img-src accordingly,

e.g.
```
img-src 'self' https://picsum.photos https://*.picsum.photos data:;
```

### HTTP Spinner aka API Spinner aka HTTP API Loader

So much asynchronous stuff going on... in order to visualize that, I decided to add a simple Bootstrap spinner when calls are made to either API, REST or GraphQL.

Steal with pride is always my first strategy! After a few minutes of inspiration from several blogs I decided to implement
https://medium.com/swlh/angular-loading-spinner-using-http-interceptor-63c1bb76517b.
While looking at the result I saw OFIs (opportunities for improvement). The main differences are:
1. In the service, store the number of requests to a URL instead of a boolean. Why? There is actually no point storing a boolean (a list could work too), but most of all, lots of requests will go to the same URL: the GraphQL endpoint.
1. No custom spinner, just plain old Bootstrap.
1. No subscription; let Angular handle the observable with an async pipe.
1. The interceptor should use the rxjs operator tap, instead of map and catchError.

The result was

```
import { Injectable } from '@angular/core';
import { BehaviorSubject } from 'rxjs';

@Injectable({
  providedIn: 'root',
})
export class SpinnerService {
  spinnerSubject: BehaviorSubject<boolean> = new BehaviorSubject<boolean>(false);
  requestMap: Map<string, number> = new Map<string, number>();

  constructor() {}

  addRequest(url: string): void {
    if (!url) {
      console.warn('URL is missing');
      return;
    } else if (!url.includes('api/') && !url.includes('graphql')) {
      return;
    }
    const n = this.requestMap.get(url) || 0;
    this.requestMap.set(url, n + 1);
    this.spinnerSubject.next(true);
  }

  removeRequest(url: string): void {
    if (!url) {
      console.warn('URL is missing');
      return;
    }
    const n = this.requestMap.get(url) || 0;
    if (n > 1) {
      this.requestMap.set(url, n - 1);
    } else {
      this.requestMap.delete(url);
    }

    if (this.requestMap.size === 0) {
      this.spinnerSubject.next(false);
    }
  }
}
```

```
import { Injectable } from '@angular/core';
import { HttpRequest, HttpHandler, HttpEvent, HttpInterceptor, HttpResponse } from '@angular/common/http';
import { Observable } from 'rxjs';
import { tap } from 'rxjs/operators';
import { SpinnerService } from 'app/shared/bon/spinner/spinner.service';

@Injectable()
export class SpinnerInterceptor implements HttpInterceptor {
  constructor(private spinnerService: SpinnerService) {}

  intercept(request: HttpRequest<any>, next: HttpHandler): Observable<HttpEvent<any>> {
    this.spinnerService.addRequest(request.url);
    return next.handle(request).pipe(
      tap(
        (evt: HttpEvent<any>) => {
          if (evt instanceof HttpResponse) {
            this.spinnerService.removeRequest(request.url);
          }
          return evt;
        },
        () => {
          this.spinnerService.removeRequest(request.url);
        }
      )
    );
  }
}
```

main.component.ts constructor
```
// This prevents an ExpressionChangedAfterItHasBeenCheckedError for subsequent requests
this.spinner$ = this.spinnerService.spinnerSubject.pipe(delay(0));
```

main.component.html
```
<div class="clearfix" *ngIf="spinner$ | async">
    <div class="spinner-border spinner-border-sm float-right" role="status">
        <span class="sr-only">Loading...</span>
    </div>
</div>
```

## Did I just find a bug???

### Incorrect entity name in integration tests
When generating code from JDL, some ITs contain lines like
```
if (TestUtil.findAll(em, Cattle.class).isEmpty()) {
```
while it should be
```
if (TestUtil.findAll(em, CattleEntity.class).isEmpty()) {
```
because I use the JDL application config
```
entitySuffix Entity
```
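Until the generator is fixed, a blunt one-liner patches the generated ITs. The entity name is just an example:

```sh
# Rewrite the missing Entity suffix in the generated integration tests
grep -rl 'findAll(em, Cattle.class)' src/test/java \
  | xargs sed -i 's/findAll(em, Cattle\.class)/findAll(em, CattleEntity.class)/g'
```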
## What in the name of some norse god!?

### Why did I use camelCase in the JDL basename???
I should have used lower case and gotten rid of the case-sensitivity confusion. And sometimes it's just ugly...

### Linage != Lineage
If you plan to misspell a word, make sure to do it properly. Do it really, really badly, so you don't land on another word...
Linage: the number of lines in printed or written matter, especially when used to calculate payment.
Lineage: lineal descent from an ancestor; ancestry or extraction.

## Build
GitHub Actions will be the main carrier of builds.
Every now and then when I build locally, be sure to give it some extra memory...
```
./mvnw -Pprod verify jib:dockerBuild -DargLine="-Xmx1024m"
```

And to publish the images to Docker Hub
```
docker image tag bongateway frostmark/bongateway
docker push frostmark/bongateway
docker image tag boncontentservice frostmark/boncontentservice
docker push frostmark/boncontentservice
docker image tag bonlivestockservice frostmark/bonlivestockservice
docker push frostmark/bonlivestockservice
docker image tag bonreplicaservice frostmark/bonreplicaservice
docker push frostmark/bonreplicaservice
```

### Local CLI release for debug purposes
e.g. bon-content-service
```
export RELEASE_TAG=dlog1a
./mvnw -Pprod verify jib:dockerBuild -DargLine="-Xmx1024m" -DskipTests -Drevision=$RELEASE_TAG -Dchangelist=
docker image tag boncontentservice:$RELEASE_TAG frostmark/boncontentservice:$RELEASE_TAG
docker push frostmark/boncontentservice:$RELEASE_TAG
kubectl set image --record deployment/boncontentservice boncontentservice-app=frostmark/boncontentservice:$RELEASE_TAG -n bonlimousin
```

## Run

### docker-compose
Intended for local testing.

### kubernetes
Common config for most of the apps.
Prepared for setup of config maps and secrets for different cloud-managed k8s and databases.

Use the bonlimousin k8s context on the Scaleway cluster
```
export KUBECONFIG_SAVED=$HOME/.kube/config
export KUBECONFIG=$KUBECONFIG_SAVED:$HOME/myfolder/myprojects/bonlimousin_com/jhipworkspace/bonParent/k8s/scaleway/kubeconfig-k8s-bonlimousin.yml
kubectl config use-context admin@ksbonlimousin
```
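Worth a quick sanity check that kubectl is really talking to the right cluster before rolling anything out:

```sh
# Confirm the active context and that the Scaleway cluster answers
kubectl config current-context
kubectl cluster-info
kubectl get nodes -o wide
```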
Redeploy the apps. The pull policy is set to Always
```
kubectl rollout restart deployment/bongateway -n bonlimousin
kubectl rollout restart deployment/boncontentservice -n bonlimousin
kubectl rollout restart deployment/bonlivestockservice -n bonlimousin
kubectl rollout restart deployment/bonreplicaservice -n bonlimousin
```

How is it going?
```
kubectl get pods -n bonlimousin
```

### Restart app in cluster

Kill all pods by scaling the deployment to zero replicas
```
kubectl scale deployment bongateway --replicas=0 -n bonlimousin
```

Then scale it back to 1 again
```
kubectl scale deployment bongateway --replicas=1 -n bonlimousin
```

### Kubernetes with ingress and letsencrypt on Scaleway

Install cert-manager on the Scaleway cluster
```
kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v1.0.4/cert-manager.yaml
```

Copy the local example yml of the Issuer from
```
localhost/bonconfig-k8s/bon-letsencrypt.yml
```
to
```
scaleway/bonconfig-k8s/bon-letsencrypt.yml
```

Change the server field to the production URL
```
spec.acme.server: https://acme-v02.api.letsencrypt.org/directory
```

and install it
```
kubectl apply -f scaleway/bonconfig-k8s/bon-letsencrypt.yml
```

**Make sure that your DNS is pointing limousin.se to your ingress URL** (plus the registry and console)

#### Cloudflare can also manage TLS

And now it does, for the root limousin.se.
The original reason for this change was that the load balancer changed IP due to the teardown and setup of the cluster. The handshake between Let's Encrypt and cert-manager then failed for limousin.se, but not for beta.limousin.se, since the latter does not use the load balancer.

### Elasticsearch settings - 7 bad years later

Background: In production, the startup of bonContentService fails with the confusing message that a random index can't be created.

This will have to do, for now:
https://selleo.com/til/posts/esrgfyxjee-how-to-fix-elasticsearch-forbidden12index-read-only

*Year 8 - Part 2*
Background: The Spring Boot Actuator health check is not helping either... in production.
I can't really tell from the logs what goes wrong after a while; the app starts rebooting and is flagged as 503.

Disabling the Elasticsearch health check in the k8s config stopped this behaviour.
Some day I need to find out why this happens.

```
env:
  - name: MANAGEMENT_HEALTH_ELASTICSEARCH_ENABLED
    value: 'false'
```

*Year 13 - Part 3*

Hmm... hmmm... while randomly clicking around in the k8s dashboard to check the status, a few records show up. A few surprising records.
According to the dashboard I have two persistent volume claims called storage-jhipster-elasticsearch-data-0 and storage-jhipster-elasticsearch-master-0.
I can't remember creating them... can't find a reference either... but Elasticsearch is using them!
There will probably be a part 4 to this confusion, but for now increasing their size made the difference.
I also had a few big indices related to metrics. Removed them as well. Need to keep an eye on that in the future.
Summa summarum: the disk was at the low watermark.

https://www.datadoghq.com/blog/elasticsearch-unassigned-shards/#reason-5-low-disk-watermark
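Once a node trips the flood-stage watermark, Elasticsearch also flips indices to read-only. With a port-forward in place (as in part 4 below), both the disk usage and the recovery can be handled over the REST API; a sketch, where the _all scope is the blunt variant:

```sh
# Disk usage per node, straight from the cluster
curl -X GET "localhost:9200/_cat/allocation?v&pretty"

# If the flood-stage watermark was hit, indices are read-only; clear the block once space is freed
curl -X PUT "localhost:9200/_all/_settings" -H 'Content-Type: application/json' \
  -d '{"index.blocks.read_only_allow_delete": null}'
```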
*Year of the Pandemic - Part 4*

So here is part 4. Maybe not the part 4 I was waiting for...
Until the part 4 I was looking for arrives (read: a solution), check the environment every now and then.

First check the disk space of the Elasticsearch pods, or more accurately the PVCs that belong to the pods.
```
kubectl exec -n bonlimousin jhipster-elasticsearch-data-0 -- df
kubectl exec -n bonlimousin jhipster-elasticsearch-master-0 -- df
```
The ES volume is probably full, so connect to the Elasticsearch master
```
kubectl port-forward jhipster-elasticsearch-master-0 9200:9200 -n bonlimousin
```
If it's full or almost full, try to purge the logs indices.
But maybe before that, have a peek at the status of the logs and metrics
```
curl -X GET "localhost:9200/_cat/indices/logs-*?v=true&s=index&pretty"
curl -X GET "localhost:9200/_cat/indices/metrics-*?v=true&s=index&pretty"
```
Ok, so now we know that we have a shitload of indices. Start slow and remove a previous month or so
```
curl -X DELETE "localhost:9200/logs-2021.01*"
curl -X DELETE "localhost:9200/metrics-2021.01*"
```