Wednesday, June 18, 2025

Design Uplift Part VI - Using Tools, AI Copilot in Design and Development.

Here are the complete details of my LinkedIn post on using Copilot to generate code and make changes quickly.

Ref:  


Design Objectives:
  • User requests / user actions such as "like content" and "comment on content" will be queued and processed.
  • The processing path is separate from the regular API request/response flow used by the application.
  • The objective is a scalable processing architecture that also makes it possible to prioritise user requests and to enable real-time updates to clients.
  • Generate 90% of the code using Copilot and use other tools in design and development.

Terminology Used: 

SignalR: SignalR supports "server push" functionality, in which server code can call out to client code in the browser using Remote Procedure Calls (RPC), rather than the request-response model common on the web.
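To make the "server push" idea concrete, here is a minimal C# sketch: a hub on the server and a .NET test client that listens for pushed messages. The hub name, method name, message name, and URL are illustrative assumptions, not the exact ones used in this project.

using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR;
using Microsoft.AspNetCore.SignalR.Client;

// Server side: a minimal hub. Clients receive "LikeCountUpdated" messages pushed from the server.
public class NotificationHub : Hub
{
    public Task BroadcastLike(string contentId, int likeCount) =>
        Clients.All.SendAsync("LikeCountUpdated", contentId, likeCount);
}

// Client side: a small C# test client (Microsoft.AspNetCore.SignalR.Client) that listens for server pushes.
public static class SignalRTestClient
{
    public static async Task ListenAsync()
    {
        var connection = new HubConnectionBuilder()
            .WithUrl("https://<your-api>.azurewebsites.net/hubs/notifications") // assumed hub endpoint
            .WithAutomaticReconnect()
            .Build();

        connection.On<string, int>("LikeCountUpdated", (contentId, likeCount) =>
            Console.WriteLine($"Content {contentId} now has {likeCount} likes"));

        await connection.StartAsync();
        Console.ReadLine(); // keep the connection open while listening
    }
}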

GitHub Copilot: GitHub Copilot is an AI-powered tool designed to assist developers throughout the software development lifecycle. It provides contextualized assistance, including code completions, chat assistance, code explanations, and answers to documentation queries.

Azure Service Bus: Azure Service Bus is a fully managed enterprise message broker with message queues and publish-subscribe topics.

BFF Pattern: The Backend for Frontend (BFF) pattern addresses the challenge of serving diverse client applications by creating a dedicated backend for each type of client, such as web, mobile, or IoT. Instead of a single backend trying to cater to all, BFF allows tailored APIs that optimize performance, reduce client-side complexity, and improve the user experience. This architecture ensures each client receives only the data and functionality it needs, enhancing efficiency and scalability across platforms.

Use case: 

User actions on the web interface, such as commenting, liking, or simply viewing content, are processed at the back end. Regular web requests fetch and display data, and some actions are saved to the DB. In this example the actions are processed by a fully decoupled system where request priority can be managed and huge volumes of requests can be handled. These user actions can be processed with some delay if processing takes longer, and some actions need server-to-client notifications.


The initial idea is to use Copilot to generate as much code as possible. It is possible to provide UML as context to Copilot. I provided the diagram shown above as context and started a chat with Copilot.

The initial command was:
generate code (I could get code for all blocks in C#; even the client was generated in C#).

Copilot generated BookMarkApi.cs, BFFBookmark.cs, etc., with methods and properties.

Since I expected the BFF to write to the bookmark API through Service Bus, and I also needed an Angular UI that can listen to notifications from the server, I added more chat commands to generate the BFFBookmark class as a GraphQL endpoint with a GraphQL server, passing data from the BFF to the back end using Service Bus. Copilot was also able to generate a test client that listens to notifications from the server using SignalR.

In my second learning experience, I was able to generate a lot of the code required to integrate the above feature into my product.

How I integrated the above code into my project:

talash.azurewebsites.com is my product, where images are displayed on the home page. Here are the steps I followed to use the generated code on my web site; during integration, many runtime/compilation errors were resolved using GitHub Copilot.

  1. Azure portal 
  • Objective: create the Service Bus resources. App Services will host the front end and back end, and a Function App will process messages on the Service Bus.
    1. Create a Service Bus namespace and a queue in Azure.
    2. Create a Function App with a trigger that reads from the Service Bus.
    3. SignalR uses WebSockets, so make sure WebSockets are enabled in the server API configuration.
    4. The web app and API are already deployed to Azure as App Services.
  1. UI changes:  
Objective: the UI will display likes for content and also show real-time updates from other users. The images on talash.azurewebsites.com are one example; the infrastructure above is common to other apps using the Talash platform.
    1. Add a like button to the images on the front page.
  1. BFF changes: 
Objective: it is the BFF layer that consumes all calls from the front end. Updating a user action such as a like is a mutation operation, which writes to the Service Bus so that other services can pick up the data and process it.

  This loosely coupled design addresses queueing, allows requests to be processed at different priorities, and can handle a high volume of incoming requests without scaling hardware proportionally with the number of user requests. It is common infrastructure for the apps using Talash back-end services, so integration with the system stays generic and is not tied to any single app.

    1. Add the mutation UpdateUserAction to the BFF API code (a hedged sketch follows this list).
    2. Data on the Service Bus carries an application ID, so the like could be for data from other clients such as the Talash video app or Talash PDF drive (http://talashlogs.z26.web.core.windows.net/); the infrastructure processes actions for all apps built on the Talash API.
    3. This feature works for any web app that uses the BFF.
    4. The BFF code writes to the Service Bus.
    5. The Strategy pattern is used to prioritize processing of queue content and to handle the additional processing required for analytics and sales data.
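Here is a minimal sketch of what the UpdateUserAction mutation might look like, assuming Hot Chocolate for the GraphQL layer and the Azure.Messaging.ServiceBus SDK. The UserAction type, the "user-actions" queue name, and the priority property are illustrative assumptions, not the exact names in the repository.

using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;
using HotChocolate;

// Illustrative payload; field names are assumptions, not the repo's exact model.
public record UserAction(string ApplicationId, string ContentId, string ActionType, string UserId);

public class Mutation
{
    // GraphQL mutation on the BFF: instead of writing to the DB directly,
    // it enqueues the action on Service Bus for decoupled, prioritized processing.
    public async Task<bool> UpdateUserAction(
        UserAction input,
        [Service] ServiceBusClient busClient)
    {
        ServiceBusSender sender = busClient.CreateSender("user-actions"); // assumed queue name
        var message = new ServiceBusMessage(System.Text.Json.JsonSerializer.Serialize(input))
        {
            // A custom property that a strategy on the consumer side could use for prioritization.
            ApplicationProperties = { ["priority"] = input.ActionType == "like" ? "high" : "normal" }
        };
        await sender.SendMessageAsync(message);
        return true;
    }
}

On the server this would be registered with something like builder.Services.AddGraphQLServer().AddMutationType<Mutation>() and a singleton ServiceBusClient; the exact wiring depends on the generated project.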
  1. Function app
    1. A Function App with a Service Bus trigger reads the data and writes it to MongoDB (a hedged sketch follows below).
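Here is a minimal sketch of such a function, assuming the in-process Azure Functions model and the MongoDB .NET driver; the queue name, connection setting names, database, collection, and document shape are all illustrative.

using System;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;
using MongoDB.Driver;

public static class ProcessUserAction
{
    // Connection strings come from app settings; the setting and database names are assumptions.
    private static readonly IMongoCollection<UserActionDocument> Collection =
        new MongoClient(Environment.GetEnvironmentVariable("MongoConnection"))
            .GetDatabase("talash")                                   // assumed database name
            .GetCollection<UserActionDocument>("userActions");       // assumed collection name

    [FunctionName("ProcessUserAction")]
    public static async Task Run(
        [ServiceBusTrigger("user-actions", Connection = "ServiceBusConnection")] string messageBody,
        ILogger log)
    {
        var action = System.Text.Json.JsonSerializer.Deserialize<UserActionDocument>(messageBody);
        log.LogInformation("Processing {ActionType} for content {ContentId}", action.ActionType, action.ContentId);
        await Collection.InsertOneAsync(action);
    }
}

// Illustrative document shape, mirroring the UserAction payload from the BFF sketch.
public class UserActionDocument
{
    public string ApplicationId { get; set; }
    public string ContentId { get; set; }
    public string ActionType { get; set; }
    public string UserId { get; set; }
}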
  2. API with SignalR hub.
Objective: SignalR is used to push real-time updates to all users browsing the content. The system can be further improved with Azure Cache for Redis as an in-memory store instead of reading and writing directly to the DB; this layer will be improved later with a better cache design.
    1. Changes to the like count are sent back to clients using SignalR (a hedged sketch follows this list).
    2. All web apps browsing this content show the same like count. The query is still cached at the GraphQL server level, so a new web client gets data from the cached query rather than from MongoDB. Even though the same images are served within the cache expiry time, the like count is shown in real time.
    3. The only small mismatch occurs if a web app is opened after a like action and shows the existing cached query count. That can be rectified by cache invalidation; otherwise, real-time updates are achieved while MongoDB is not hit within the cache expiry time. This solution can be tweaked further.
    4. Enable verbose error details when creating the SignalR service on the server, which helps diagnose any issues connecting to the hub.
    5. Cache design: the data integrity issue above can be addressed with Azure Cache for Redis, where both reads and writes go through the cache.
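Here is a hedged sketch of the server-side push, assuming a NotificationHub like the one sketched in the terminology section and ASP.NET Core's IHubContext; the class, method, and message names are illustrative.

using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR;

// Called by the API (or the processing pipeline) after a like has been persisted.
public class LikeNotifier
{
    private readonly IHubContext<NotificationHub> _hubContext;

    public LikeNotifier(IHubContext<NotificationHub> hubContext) => _hubContext = hubContext;

    public Task NotifyLikeCountChangedAsync(string contentId, int likeCount) =>
        _hubContext.Clients.All.SendAsync("LikeCountUpdated", contentId, likeCount);
}

On the server, builder.Services.AddSignalR(options => options.EnableDetailedErrors = true) gives the verbose connection errors mentioned in step 4, and app.MapHub<NotificationHub>("/hubs/notifications") exposes the hub endpoint (the path is an assumption).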
  1. Testing: 
Objective: apps using the above feature should show real-time notifications.
    1. The website above has image data that is two-way bound to the home page.
    2. Updated like counts are shown to all clients browsing the home page.
    3. All unit tests were generated using Copilot.
  1. Debugging and troubleshooting
         Here is one query error that took some time to track down by searching, but Copilot gave me a quick and accurate answer with the correct fix for this mutation query error. In this case just one space was wrong in the mutation query.

Here is the chat with Copilot.




I will share the git repo here after masking a few passwords. The steps above were completed using code generated by Copilot, and it was really useful. Next we can discuss how to use these tools more effectively.

Testing, code reviews, and code refactoring can also be done effectively with Copilot. Documentation is the best feature I have found in Copilot: it can explain existing code and, if required, generate documentation. "/Explain" will give us everything about the code and the logic used to build it, so going forward "/Explain", "/Fix", "/Test", and "/Document" may become quite popular terminology in the dev community.


AI Search & Prompts:


As part of app modernization, the major features implemented or recommended depend on AI capabilities: searching and querying huge numbers of documents backed by a vectorized document DB, converting plain language to SQL, or voice-to-query features. Nowadays it is quick and easy to generate the related code using the AI development studios available from vendors such as Microsoft, Amazon, Google, and others.


Entities, DB and Analytics


In one AI-related discussion, the question was whether Copilot can generate ad hoc reports from a DB. With the "Vision" feature in Copilot it is possible to generate SQL from an entity diagram, and the generated SQL is quite good. Generating reports from that SQL would be a big step and could be very useful for other IT professionals such as business analysts, quality engineers, and managers. This is one space to watch in the future: it can help to quickly produce summary reports, and the open question is how reliably these new features can be used.

Build and deploy Use cases:


Here is an example of using Copilot with Terraform modules; the prompts were:

please write the terraform config for a lambda function
please allow the lambda function to access sns topics and sqs queues
write me a list of variables to use

Overall, Copilot-style tools cover most of the development, testing, and DevOps space and help improve productivity by 30 to 40 percent or more. My initial impression was that it only did some code refactoring, but it is now a much improved tool and we can expect more features in the near future.

Friday, June 13, 2025

Design uplift series - V Scalability, high availability and Modularity.

Most of the examples we have seen are client-server interactions, in a sync or async way, where each request is processed and the result is displayed in the UI or stored to the DB. When client requests come in high volume and need more processing time, we need to design systems where requests can be queued and processed, handling scalability in terms of software rather than hardware.

  • Here is my first demo component diagram, which introduces a queue system that buffers requests before they are processed further. It is not very accurate in terms of component representation, but the idea is to add requests to the Service Bus, then save them to MongoDB and, if required, inform clients with real-time notifications.
  • Requests are queued and processed based on priority.
  • No request is lost even if the hardware cannot keep up with the huge number of incoming requests.
  • So data integrity is maintained.



In this example, queue processing based on priority, together with the other components, can be used to scale up the system and to process selected requests with higher priority.




The queuing, events, Service Bus, and other components above all help to improve the scalability of the system. Here are more dimensions to this:




After all this discussion, the following could be a complex, highly available modern application design with all components.
We can get more explanation of all the blocks from the Microsoft or AWS web sites: Baseline highly available zone-redundant app services web application - Azure Architecture Center | Microsoft Learn




Sunday, April 27, 2025

Design uplift series - Part 4 BFF design (in progress)

 

Part 1 of the BFF design article explains the main advantages of GraphQL for implementing a BFF and important scenarios where GraphQL can be used, along with other aspects such as security, common anti-patterns, and how to overcome them. The articles do not go into an in-depth explanation of how to design the schema, queries, types, and mutations.

  Right now, selecting a particular GraphQL server mostly depends on the programming language used for application development; a few vendors have solutions that get adopted without much comparison of alternative designs, and the choice is mostly linked to the language used in the application/product. But this will change. BFF is now very much part of best design patterns, like the API layer or the repository pattern, and the BFF layer is slowly becoming part of most new design/architecture diagrams. This indicates that handling complex scenarios needs better tooling along with the choice of GraphQL server. So this article will explain the total end-to-end system design, not just setting up a GraphQL server. A newly developed GraphQL server such as GraphQL Yoga already has more than 40k downloads, which shows there will be more design alternatives going forward.

          If we look at some of the security risks discussed in the book "Black Hat GraphQL", it also gives examples of how GraphQL servers from a few vendors provide additional security. So there are many parameters beyond just setting up the GraphQL server: security, schema generation, managing subgraphs, and moving towards federated GraphQL. All of these parameters help in designing a better GraphQL ecosystem. I am using Apollo Client, which can cache client-side data if the query has not changed. These are a few vendor features that can sometimes influence the whole design.


In Part 2 of this article we look at the available design options and discuss the suitable use cases. It is important to understand the in-depth capabilities of the existing frameworks, which can help to quickly stand up a GraphQL server and GraphQL client and can also add many features such as security, build, logging, scalability, and other components as plug-and-play, since these frameworks are designed with many add-on components that are easy to integrate through APIs.

Design Objectives

Let us consider a few important use cases where GraphQL is used extensively. One design discussion is that all the objects in GraphQL are connected like a graph, and the query maps closely to the data design.

 


There are many articles discussing this inherent feature of GraphQL and its advantages and disadvantages. As one article explains it, the schema is a graph but query results are always trees. The articles also extend the advantage of using GraphQL to graph-based databases, and explain with good examples how to validate and maintain data quality so that it works better with GraphQL.

One way to look at it: designing a GraphQL server with Neo4j or similar frameworks is closer to data-driven design. There can be many variations on this.

There are a few advanced GraphQL servers that can expose DTO objects as GraphQL endpoints. Hasura is one such GraphQL implementation: similar thinking and design, but Hasura talks more in terms of DTO objects instead of database types.

Let us look at the AWS design for a GraphQL server with the AppSync-based design.


 

As explained by AWS, the three subgraphs are composed into the Fusion gateway, which runs as a Fargate service behind an Application Load Balancer.

The above system from AWS brings the following advantages, since it ships a GraphQL server with many features already available that can be extended based on the application design. Here are a few advantages of using it:

  • Real-time data synchronization using subscriptions.
  • Integration with AWS services like DynamoDB, Lambda, and S3.
  • Simplified authorization with Amazon Cognito, API keys, and IAM.

API load balancing and the security capabilities of IAM come along with it, and other AWS components can be used quickly. The dev focus can then be on implementing the schema, types, and query files. This kind of design approach, where many components can be plugged in, can reduce development time. We can discuss the major features that make a design more attractive when we compare the different designs available for a real example.

Hot Chocolate: Hot Chocolate is a GraphQL platform that can help you build a GraphQL layer over your existing and new infrastructure.

More precisely, Hot Chocolate is an open-source GraphQL server for the Microsoft .NET platform that is compliant with the newest GraphQL October 2021 spec + drafts, which makes Hot Chocolate compatible with all GraphQL-compliant clients like Strawberry Shake, Relay, Apollo Client, and various other GraphQL clients and tools. It is a Microsoft-platform-based design and works closely with Microsoft infrastructure.
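For context, a minimal Hot Chocolate setup in ASP.NET Core looks roughly like this; the Query type and field are placeholders, not taken from any project in this article.

using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;

var builder = WebApplication.CreateBuilder(args);

// Register the Hot Chocolate GraphQL server with a root query type.
builder.Services
    .AddGraphQLServer()
    .AddQueryType<Query>();

var app = builder.Build();
app.MapGraphQL();   // serves the /graphql endpoint (with the built-in GraphQL IDE in development)
app.Run();

// Illustrative query type; the field is a placeholder.
public class Query
{
    public string Hello() => "Hello from Hot Chocolate";
}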

Express GraphQL, Apollo GraphQL Server, and several other popular frameworks are available for implementing a GraphQL server. We can dive deeper when we design a couple of real examples, after discussing some more important design patterns and frameworks.

Design Patterns and Use cases

Let us consider the scenario of a monolithic application built on the MVC pattern, which was adopted by many legacy systems. When such systems need to be redesigned as modern microservice-based, cloud-enabled applications, new patterns like BFF, serverless, and other technologies are recommended for the new application design.

Use case 1: Refactoring a legacy application:

In a simple design, if all controller endpoints are mapped to a GraphQL endpoint and the client (the view) is decoupled from the MVC pattern, that is one way to break up a monolithic application. This is the most widely adopted approach in industry, and I see it as the best use case for introducing a BFF layer and GraphQL.

Let us look at the complete set of components and the process involved in adopting GraphQL:

Design the GraphQL server, which also involves designing the schema, queries, types, and resolvers to fetch data.

Design cache components and optimize data fetching using pagination and error handling; design the client side to interact with the GraphQL server.

Next come API management aspects such as versioning and load balancing.

Tooling: here comes the major design decision, since many vendors provide readily available schema generators, visual designers for schema, query, and type design, testing frameworks, and everything else required to set up a GraphQL server and perform the above steps with an attractive low-code/no-code approach. All remaining work could then be just hooking resolvers up to data endpoints (a hedged sketch of such a resolver follows below).
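As an illustration of "hooking a resolver to a data endpoint", here is a small Hot Chocolate-style sketch with pagination; the Product entity and IProductRepository abstraction are assumptions made up for the example.

using System.Linq;
using HotChocolate;
using HotChocolate.Types;

// Assumed entity and repository abstraction, only for illustration.
public record Product(int Id, string Name, decimal Price);

public interface IProductRepository
{
    IQueryable<Product> GetProducts();
}

public class Query
{
    // [UsePaging] lets Hot Chocolate expose a cursor-paginated connection over
    // whatever data endpoint the repository wraps (a database, a REST API, etc.).
    [UsePaging]
    public IQueryable<Product> GetProducts([Service] IProductRepository repository) =>
        repository.GetProducts();
}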

The above refactoring and redesign exercise can be extended to address many design considerations, such as separation of concerns or abstracting data.

We can discuss in-depth scenarios for this design transition from MVC to a GraphQL server, and then consider the different platforms available that best fit the refactoring or tech-refresh exercise.

Use case 2 API Aggregation: Aggregating multiple APIs, internal and/or external, and abstracting API details from clients.

Use case 3 BFF Design: Building more modular patterns in application design using BFF.

Use case 4 Using GraphQL along with microservices: GraphQL for managing microservices and a service mesh.

Here is one example of a robust architecture for use case 4 using Apollo Federation and microservices with an API gateway.

Ref: https://talashlogs.blob.core.windows.net/talash-drive/leveraging-graphql-for-next-generation-api-platforms.pdf

Important design aspects of this reference architecture are:

Apollo provides many ready-to-use components along with the GraphQL server. As discussed above, it is like a platform that provides many other components to maintain the GraphQL infrastructure and quickly build the application. As shown in this diagram, Apollo provides a GraphQL developer platform (GraphOS), which includes developer tooling, a schema registry, a supergraph CI/CD pipeline, and a high-performance supergraph runtime (the Apollo Router).

The Apollo clients provided can even manage cache control on the client side. Cache management is a bit tricky and needs more components to maintain the server cache and client cache, which we can discuss later.

Another component in the diagram is the Kong API gateway; this could be any other API manager. The important point here is that microservices need a service mesh to communicate, and they are then connected to the GraphQL server. A degree of load balancing and security is achieved with these API gateways embedded in the design.

Security is often handled with a defense-in-depth or zero-trust approach, where each layer of the stack provides security controls for authentication, authorization, and blocking malicious requests. Client-side traffic shaping with rate limits, timeouts, and compression can be implemented in the API gateway or supergraph layer, and subgraph traffic shaping (including deduplication) can be implemented at the supergraph layer. Observability via OpenTelemetry is supported across the stack to provide complete end-to-end visibility into each request via distributed tracing along with metrics and logs.



Here is another way to classify the above use cases:

Pattern | Advantages | Challenges | Best Use Cases
Client-Based GraphQL | Easy to implement, cost-effective | Performance bottlenecks, limited scalability | Prototyping, small-scale applications
GraphQL with BFF | Optimized for clients, better performance | Increased effort, higher complexity | Applications with diverse client needs
Monolithic GraphQL | Centralized management, consistent API | Single point of failure, scaling issues | Medium-sized applications, unified schema
GraphQL Federation | Scalable, modular, team autonomy | Increased complexity, higher learning curve | Large-scale, distributed systems


Best practices

What are the key principles of schema design in GraphQL?

Designing a GraphQL schema with flexibility and scalability in mind involves several key principles that ensure the schema can grow and adapt to changing requirements while maintaining performance and usability.
  • Unified Schema: A GraphQL schema defines a collection of types and their relationships in a unified manner. This allows client developers to request specific subsets of data with optimized queries, enhancing flexibility.
  • Implementation-Agnostic: The schema is not responsible for defining data sources or storage methods, making it adaptable to various backend implementations without requiring changes to the schema itself.
  • Field Nullability: By default, fields can return null, but non-nullable fields can be specified using an exclamation mark (!). This provides control over data integrity and error handling, contributing to robust and scalable schema design.
  • Query-Driven Design: The schema should be designed based on client needs rather than backend data structures. This approach ensures that the schema evolves with client requirements, supporting flexibility and scalability.
  • Version Control and Change Management: Maintaining the schema in version control allows tracking of changes over time. Most additive changes are backward compatible, but careful management of breaking changes is essential for scalability.
  • Use of Descriptions: Incorporating Markdown-enabled documentation strings (descriptions) in the schema helps developers understand and use the schema effectively, promoting a flexible development environment.
  • Naming Conventions: Consistent naming conventions, such as camelCase for fields and PascalCase for types, ensure clarity and ease of use across different client implementations, aiding in scalability. (A small illustration of the nullability and naming points follows below.)
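As a hedged illustration of the nullability and naming points in a .NET context: Hot Chocolate derives camelCase field names from PascalCase C# members and, with nullable reference types enabled, maps non-nullable C# members to non-null (!) GraphQL fields. The type and fields below are invented for the example.

using HotChocolate;

[GraphQLDescription("A content item that can be liked.")]
public class ContentItem
{
    // Non-nullable in C# -> exposed as String! / Int! in the schema (fields become id, title, likeCount).
    public string Id { get; set; } = default!;
    public string Title { get; set; } = default!;
    public int LikeCount { get; set; }

    // Nullable in C# -> exposed as a nullable String field named description.
    public string? Description { get; set; }
}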

Tools and devops 


References:

Apollo graphql federation:

 https://talashlogs.blob.core.windows.net/talash-drive/Apollo-graphql-at-enterprise-scale-final.pdf

Security and pen test

https://talashlogs.blob.core.windows.net/talash-drive/Black+Hat+GraphQL_bibis.ir.pdf

PentestingEverything/API Pentesting/GraphQL at main · m14r41/PentestingEverything

Design patterns

https://drive.google.com/viewerng/viewer?url=https://talashlogs.blob.core.windows.net/talash-drive/API+Composition+Pattern+with+GraphQL.pdf

Naming and best practices and schema design

GraphQL: Standards and Best Practices | by Andrii Andriiets | Medium

GraphQL Best Practices for Efficient APIs

Saturday, February 1, 2025

Design uplift series - Part 3 Async, Batch, Threads and Serverless infrastructure.

In this article we will go through async programming: making request/response calls to a third-party API, aggregating the data, and sending it back to the service. Here is one example where we get a response from an API, and this call is executed around 200 times per client request. So it is important to see how we can return a response for this client request within 5 seconds.

One way is to spawn 200 threads and get the responses quickly. Let us see what other options could be memory- and time-optimized, and how this kind of batch processing can be hosted using the latest available technologies.

private async Task<List<APIResponse>> GetData(List<string> names)
{
    List<APIResponse> data = new List<APIResponse>();

    try
    {
        foreach (string name in names)
        {
            // One call per name; in this scenario the call runs around 200 times per client request.
            string param = "name=" + Uri.EscapeDataString(name);

            HttpResponseMessage response =
                await client.GetAsync("https://api.genderize.io?" + param +
                                      "&apikey=<APIKEY>");
            string content = await response.Content.ReadAsStringAsync();

            // A single-name request returns one JSON object, so deserialize it directly.
            var value = JsonConvert.DeserializeObject<APIResponse>(content);
            data.Add(value);
        }
    }
    catch (Exception ex)
    {
        // In a real service this would be logged; return whatever has been collected so far.
        Console.WriteLine(ex.Message);
    }

    return data;
}

A few important design principles for the above problem could be:

1. The above function should be executed in parallel rather than as a blocking synchronous call; async should be used.

2. The number of calls is about 200, and it is not good design to loop N times and create N threads. Most operating systems limit spawned resources, whether threads or file handles. So the parallel execution should be divided into batches, with the result of each batch combined into the response (see the batching sketch after this list).

3. The complete processing can be done on independent infrastructure such as serverless apps or Lambda, since it does not use any data from the application; this also makes it possible to scale the corresponding hardware independently if required.
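Here is a minimal sketch of that batching idea: SemaphoreSlim caps how many calls are in flight at once, so 200 names never means 200 simultaneous requests or threads. The FetchOneAsync helper and the APIResponse shape are illustrative stand-ins for the API call shown earlier.

using System.Collections.Generic;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

public class APIResponse { public string name { get; set; } public string gender { get; set; } }

public static class BatchedClient
{
    // Placeholder for the single-call logic shown earlier (HTTP call + deserialize).
    private static Task<APIResponse> FetchOneAsync(string name) =>
        Task.FromResult(new APIResponse { name = name, gender = "unknown" });

    public static async Task<List<APIResponse>> GetDataBatchedAsync(
        List<string> names, int maxConcurrency = 20)
    {
        var throttle = new SemaphoreSlim(maxConcurrency);
        var tasks = names.Select(async name =>
        {
            await throttle.WaitAsync();           // wait for a free slot
            try
            {
                return await FetchOneAsync(name); // at most maxConcurrency calls run concurrently
            }
            finally
            {
                throttle.Release();
            }
        });

        APIResponse[] results = await Task.WhenAll(tasks);
        return results.ToList();
    }
}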

 Design Patterns


Worker pool / job queue pattern: The worker pool pattern is a simple and widely used concurrency pattern for distributing multiple jobs to multiple workers.



In the image above, jobs are stored in a data structure (a job queue) and a pool of worker threads picks up jobs via the scheduler. With access to multiple cores it is possible to process them in parallel, as in Go. A small C# sketch of the pattern follows below.
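Here is a hedged C# sketch of the worker pool idea using System.Threading.Channels as the job queue; the job type, worker count, and simulated work are illustrative.

using System;
using System.Threading.Channels;
using System.Threading.Tasks;

public static class WorkerPoolSketch
{
    public static async Task RunAsync()
    {
        var jobs = Channel.CreateUnbounded<string>();   // the job queue
        const int workerCount = 4;                      // illustrative pool size

        // Start the worker pool: each worker pulls jobs from the shared queue.
        var workers = new Task[workerCount];
        for (int i = 0; i < workerCount; i++)
        {
            int workerId = i;
            workers[i] = Task.Run(async () =>
            {
                await foreach (string job in jobs.Reader.ReadAllAsync())
                {
                    Console.WriteLine($"Worker {workerId} processing {job}");
                    await Task.Delay(100);              // simulate work
                }
            });
        }

        // Producer: enqueue jobs, then signal that no more jobs are coming.
        for (int j = 0; j < 20; j++)
        {
            await jobs.Writer.WriteAsync($"job-{j}");
        }
        jobs.Writer.Complete();

        await Task.WhenAll(workers);
    }
}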

Monitor pattern: N threads wait for some condition to become true; while the condition is false those threads sleep in a wait queue, and they are notified when the condition becomes true.

Double-checked locking: for creating shared objects safely under concurrency (e.g. the singleton pattern); a small C# sketch follows.
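A hedged C# sketch of double-checked locking for a singleton (in modern C#, Lazy&lt;T&gt; is usually preferred, but this shows the pattern itself):

public sealed class Singleton
{
    private static volatile Singleton _instance;        // volatile prevents reordering of the assignment
    private static readonly object _lock = new object();

    private Singleton() { }

    public static Singleton Instance
    {
        get
        {
            if (_instance == null)                      // first check, without taking the lock
            {
                lock (_lock)
                {
                    if (_instance == null)              // second check, inside the lock
                    {
                        _instance = new Singleton();
                    }
                }
            }
            return _instance;
        }
    }
}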

Barrier pattern: all concurrently executing threads must wait for the others at a synchronization point called a barrier.

Reactor pattern: in an event-driven system, a service handler accepts events from multiple incoming requests and demultiplexes them to the respective non-blocking handlers.

Let us look at a few solutions to execute the above function and get the responses.

 

var queryTask = new List<Task>();

for (int i = 0; i < 150; i++)
{
    queryTask.Add(da.ExecuteSPAsync("Async" + i.ToString()));
}

Task.WhenAll(queryTask).Wait();

Parallel.For(0, 150, new ParallelOptions { MaxDegreeOfParallelism = 5 },
    x => da.ExecuteSP("PPWith5Threads" + x.ToString()));

Here are code samples for parallel programming using the C# libraries: Threads vs Tasks | C# Online Compiler | .NET Fiddle


Here is a basic solution that creates tasks, with a mechanism in C# to control how many run at a time. This is fair enough, and we can fine-tune MaxDegreeOfParallelism according to the resources and response time required. This concept is thread pooling, and it is available in Spring Batch settings and other programming languages as well.

Here is one configuration used in a spring batch job

  • core-pool-size: 20
  • max-pool-size: 20
  • throttle-limit: 10

Here is an example from Python doing a similar task, i.e. making multiple requests simultaneously using asyncio.gather:

import asyncio
import aiohttp

# fetch_data(url) is assumed to be defined elsewhere (e.g. an aiohttp GET helper).
async def fetch_multiple():
    urls = [
        "https://api.github.com/users/github",
        "https://api.github.com/users/python",
        "https://api.github.com/users/django"
    ]
    async with aiohttp.ClientSession() as session:
        tasks = []
        for url in urls:
            tasks.append(asyncio.create_task(fetch_data(url)))
        results = await asyncio.gather(*tasks)
        return results

How to Measure improvement in processing 

Tasks and the event loop

Consider this example: Grandmaster Judith Polgar is at a chess convention. She plays against 24 amateur chess players. To make a move it takes her 5 seconds. The opponents need 55 seconds to make their move. A game ends at roughly 60 moves or 30 moves from each side. (Source: https://github.com/fillwithjoy1/async_io_for_beginners)

Synchronous version

import time

def play_game(player_name):
    for move in range(30):
        time.sleep(5) # the champion takes 5 seconds to make a move
        print(f"{player_name} made move {move+1}")
        time.sleep(55) # the opponent takes 55 seconds to make a move

if __name__ == "__main__":
    players = ['Judith'] + [f'Amateur{i+1}' for i in range(24)]
    for player in players:
        play_game(player)

Asynchronous version

import asyncio

async def play_game(player_name):
    for move in range(30):
        await asyncio.sleep(5) # the champion takes 5 seconds to make a move
        print(f"{player_name} made move {move+1}")
        await asyncio.sleep(55) # the opponent takes 55 seconds to make a move

async def play_all_games(players):
    tasks = [asyncio.create_task(play_game(player)) for player in players]
    await asyncio.gather(*tasks)

if __name__ == "__main__":
    players = ['Judith'] + [f'Amateur{i+1}' for i in range(24)]
    asyncio.run(play_all_games(players))

In the synchronous version, the program runs sequentially, playing one game after another. Each game takes 30 rounds of (5 + 55) = 60 seconds, i.e. 1,800 seconds, so the 24 games take roughly 24 * 1,800 = 43,200 seconds (about 12 hours) to complete.

In the asynchronous version, the program runs concurrently, allowing multiple games to be played at the same time. Because every await asyncio.sleep() yields control to the event loop, all the games progress together, and the total time is approximately one game's duration, 30 * 60 = 1,800 seconds (about 30 minutes), assuming there are enough resources available to handle all the concurrent games.
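To actually measure the improvement, a simple wall-clock comparison is enough. Here is a hedged C# sketch using Stopwatch to compare a sequential loop against Task.WhenAll for the same simulated work; the 100 ms delay is a stand-in for the API call used earlier in this post.

using System;
using System.Diagnostics;
using System.Linq;
using System.Threading.Tasks;

public static class MeasureImprovement
{
    // Stand-in for the real API call (e.g. the GetData call shown earlier).
    private static Task SimulatedCallAsync() => Task.Delay(100);

    public static async Task CompareAsync()
    {
        const int calls = 50;

        var sw = Stopwatch.StartNew();
        for (int i = 0; i < calls; i++)
        {
            await SimulatedCallAsync();   // sequential: one call at a time (~calls * 100 ms)
        }
        Console.WriteLine($"Sequential: {sw.ElapsedMilliseconds} ms");

        sw.Restart();
        await Task.WhenAll(Enumerable.Range(0, calls).Select(_ => SimulatedCallAsync()));
        Console.WriteLine($"Concurrent: {sw.ElapsedMilliseconds} ms"); // roughly one call's duration
    }
}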

 

