Sophie Daily Report

Apr 28, 2026  ·  13 active orgs  ·  70 test accounts excluded

Alerts

1. Organizations Overview

Company | Admin | WhatsApp | Audiences | Campaign Created | Candidates | Active | Interviewing Now | Completed | Last Active | Last Action
GrowthStage | Nihir Vedd | Not connected | No | No | | | | | Apr 24, 16:27 | Registered
Talently | Martín de la flor | Connected | 1 | Yes | 42 | Yes | 7 | 13 | |

Interviewing now (7):
  • Jesus Miguel Agamez Caraballo
  • Luis Fernando Sanchez Agostini
  • Stalyn Alexander Nogales Shagñay
  • Alex Montes
  • Rene Arias
  • Andres Felipe Diaz Gonzalez
    View conversation
    Sophie: Walk me through a production-grade project where you used TypeScript, React/Next.js, and Node.js. Beyond the scope, focus on a complex technical challenge—such as state management, SSR/ISR optimization, or backend architecture. Why did you choose specific libraries (e.g., NestJS vs. Express), and how did those choices impact the final outcome?
    Andres Felipe Diaz Gonzalez: I'm ready
  • Christian Sebastian Pinos Mendez
    View conversation
    Sophie: Walk me through a production-grade project where you used TypeScript, React/Next.js, and Node.js. Beyond the scope, focus on a complex technical challenge—such as state management, SSR/ISR optimization, or backend architecture. Why did you choose specific libraries (e.g., NestJS vs. Express), and how did those choices impact the final outcome?
    Christian Sebastian Pinos Mendez: In my previous role, I worked on a chatbot-based appointment system using TypeScript, Next.js, and Node.js (NestJS) integrated with WhatsApp. The main challenge was converting natural language messages into real actions (booking, rescheduling, canceling). I solved this by adding a middleware layer that structured the AI responses into validated DTOs before hitting the backend, ensuring consistency and avoiding errors. I also implemented transaction handling in the database to prevent double bookings. This resulted in a more natural user experience while maintaining strong data integrity and system reliability.
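Christian's middleware idea, forcing free-form model output through a validated DTO before it reaches the backend, can be sketched in plain TypeScript. The action shapes below are illustrative, not taken from his system:

```typescript
// The only action shapes the backend accepts, as a discriminated union.
type BookingAction =
  | { type: "book"; patientId: string; slot: string }
  | { type: "reschedule"; appointmentId: string; slot: string }
  | { type: "cancel"; appointmentId: string };

// Parse free-form AI output into a validated action, or reject it.
// Returning null (instead of throwing) lets the caller ask the model to retry.
function parseAction(raw: string): BookingAction | null {
  let data: unknown;
  try {
    data = JSON.parse(raw);
  } catch {
    return null;
  }
  if (typeof data !== "object" || data === null) return null;
  const a = data as Record<string, unknown>;
  switch (a.type) {
    case "book":
      return typeof a.patientId === "string" && typeof a.slot === "string"
        ? { type: "book", patientId: a.patientId, slot: a.slot }
        : null;
    case "reschedule":
      return typeof a.appointmentId === "string" && typeof a.slot === "string"
        ? { type: "reschedule", appointmentId: a.appointmentId, slot: a.slot }
        : null;
    case "cancel":
      return typeof a.appointmentId === "string"
        ? { type: "cancel", appointmentId: a.appointmentId }
        : null;
    default:
      return null;
  }
}
```

Anything that fails to parse never reaches the booking service, which is what gives the transaction layer a consistent input to protect.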
Completed (13):
  • Kevin Abel Brito Díaz strong fit
    View conversation
    Sophie: Walk me through a production-grade project where you used TypeScript, React/Next.js, and Node.js. Beyond the scope, focus on a complex technical challenge—such as state management, SSR/ISR optimization, or backend architecture. Why did you choose specific libraries (e.g., NestJS vs. Express), and how did those choices impact the final outcome?
    Kevin Abel Brito Díaz: One production-grade project that comes to mind was a large e-commerce initiative for Paris CL (Cencosud), where I worked as part of the engineering team delivering scalable customer-facing features and backend integrations. The frontend stack was based on React, and the team selected this ecosystem because most developers already had strong experience with it, which accelerated delivery speed and reduced onboarding friction. We also benefited from the maturity of the ecosystem and strong community support. On the backend, we chose NestJS over Express because the project required a more structured and scalable architecture. NestJS provided clear module separation, dependency injection, decorators, and better maintainability for a growing codebase with multiple contributors. This was especially valuable as the platform evolved and new services were added. One of the main technical challenges was state consistency and integration between frontend flows and backend services, particularly in e-commerce processes where pricing, promotions, and inventory needed to stay synchronized. We addressed this by keeping frontend state predictable and centralizing business logic in backend services instead of duplicating rules on the client side. From an engineering perspective, NestJS helped standardize patterns across the team, improve code organization, and simplify testing. The final outcome was faster collaboration, easier maintenance, and a platform architecture better prepared for future growth.
    Sophie: Describe your experience implementing serverless workflows using AWS Lambda and S3. Can you detail a specific instance where you had to handle scalability, cold starts, or security (IAM roles)? How did you manage deployment and monitoring for these services?
    Kevin Abel Brito Díaz: I’ve worked with AWS services in environments where serverless components were used to support integrations, file processing, and event-driven workflows. A representative example involved handling file uploads and automated processing flows using Amazon S3 as the storage layer and Lambda functions to trigger validations and downstream processing after new files were uploaded. One important challenge was scalability during peak usage periods, where multiple files could be uploaded simultaneously. The advantage of Lambda was the ability to scale automatically based on incoming events, which reduced the need to manage dedicated infrastructure. Regarding cold starts, we minimized impact by keeping functions lightweight, reducing unnecessary dependencies, and designing logic to execute efficiently. For time-sensitive flows, we also paid attention to runtime selection and package size. From a security standpoint, we followed least-privilege principles with IAM roles, ensuring each Lambda function only had access to the exact S3 buckets or AWS resources required. For deployment, we used structured CI/CD workflows with version-controlled environments, which helped keep releases consistent and traceable. For monitoring, we relied on centralized logs and execution metrics to track failures, duration, and retries, allowing quick troubleshooting and continuous optimization. Overall, my focus in these architectures has always been reliability, maintainability, and operational efficiency.
    Sophie: Tell me about a time a specific user pain point dictated a difficult technical pivot or architectural decision. What was the conflict between the 'ideal' technical solution and the 'necessary' user solution? How did you measure the success of your fix?
    Kevin Abel Brito Díaz: One example that comes to mind was during an e-commerce project where users were experiencing friction during checkout because some product availability and promotional data were not updating fast enough in the purchase flow. From a technical perspective, the “ideal” solution was to build a more complete real-time distributed architecture with multiple synchronized services and deeper event-driven communication. However, that approach required more implementation time and added unnecessary complexity for the immediate business need. The necessary user-focused solution was to simplify the architecture and prioritize response speed and consistency in the checkout experience. We reworked critical API flows, optimized caching rules, and centralized some business validations server-side to reduce mismatches between frontend data and backend responses. The result was a smoother checkout process, fewer cart abandonment issues related to pricing or stock inconsistencies, and faster page interactions during peak traffic periods. We measured success through lower incident reports, improved response times, and better conversion behavior in the affected checkout steps. That experience reinforced an important principle for me: elegant architecture matters, but solving the user’s pain point at the right time matters more.
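The S3-triggered validation Lambda Kevin describes has a small core that can be sketched without the AWS SDK. The event shape mirrors what S3 delivers; the `uploads/` prefix rule is an invented stand-in for his validation logic:

```typescript
// Minimal shape of the S3 event a Lambda receives (only the fields used here).
interface S3EventRecord {
  s3: { bucket: { name: string }; object: { key: string } };
}
interface S3Event {
  Records: S3EventRecord[];
}
interface ProcessResult {
  bucket: string;
  key: string;
  valid: boolean;
}

// Hypothetical rule standing in for the real validation step.
function isAcceptable(key: string): boolean {
  return key.startsWith("uploads/") && key.endsWith(".pdf");
}

// Core of the handler; the Lambda entry point would be an async wrapper
// around this that routes invalid objects to a quarantine step.
function processEvent(event: S3Event): ProcessResult[] {
  return event.Records.map(({ s3 }) => {
    // S3 delivers object keys URL-encoded, with spaces as '+'.
    const key = decodeURIComponent(s3.object.key.replace(/\+/g, " "));
    return { bucket: s3.bucket.name, key, valid: isAcceptable(key) };
  });
}
```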
  • MARCOS LEBRON strong fit
    View conversation
    Sophie: Walk me through a production-grade project where you used TypeScript, React/Next.js, and Node.js. Beyond the scope, focus on a complex technical challenge—such as state management, SSR/ISR optimization, or backend architecture. Why did you choose specific libraries (e.g., NestJS vs. Express), and how did those choices impact the final outcome?
    MARCOS LEBRON: One production-grade project I worked on was the modernization of a healthcare prescription platform. The existing system had a legacy frontend and backend, and the goal was to move to a more scalable architecture with a modern frontend, stronger API contracts, better validation, and a foundation that could support multi-tenant usage for doctors, clinics, pharmacies, and administrators. One of the main technical challenges was managing the prescription flow; it was not just a simple form. The user had to select a patient, choose medications, validate prescription rules, attach doctor information, generate a PDF, and preserve enough state so the experience felt fast and reliable. I treated that flow almost like a prescription session, where the frontend state represented the current draft while the backend remained the source of truth. For state management, I avoided putting everything into a global store. Instead, I separated the state into different layers: local UI state stayed inside components, form state was handled closer to the form layer, and server state was handled through API-driven data-fetching patterns. The idea was to avoid a giant global state object that becomes hard to maintain. For shared domain state, like the selected tenant or authenticated user context, I kept that at a higher level because many modules needed access to it. On the rendering side, Next.js was useful because not every page needed the same strategy. Public or semi-static content could benefit from server rendering or static generation, while authenticated areas were more dynamic and API-driven. For example, dashboard and prescription flows needed fresh data and authorization checks, while informational pages or configuration-driven views could be optimized differently. On the backend side, I also structured a BFF layer for the different microfrontend applications.
For backend architecture, if I had to choose between NestJS and Express in a similar production-grade environment, I would choose based on size and complexity. Express is great when you need something lightweight, but if the API has multiple modules, validation rules, authentication, role-based access, and many developers working on it, I prefer NestJS because it gives you a stronger structure through modules, dependency injection, DTOs, guards, and interceptors.
    Sophie: Describe your experience implementing serverless workflows using AWS Lambda and S3. Can you detail a specific instance where you had to handle scalability, cold starts, or security (IAM roles)? How did you manage deployment and monitoring for these services?
    MARCOS LEBRON: I have experience working with AWS services like S3 and ECS, and also using Lambda and S3 for asynchronous processing. I implemented a file processing pipeline: the user uploaded the file to S3, and that S3 event triggered a Lambda function. The Lambda then validated the file, extracted metadata, and transformed it. The reason I like this approach is that it keeps heavy work out of the main API requests. Instead of making the user wait while the backend processes the file, the API can return a presigned URL, store the file in S3, and let Lambda handle the processing in the background. For cold starts, I keep the Lambda package small and avoid unnecessary dependencies; I prefer Node or Python runtimes, which start faster compared to Java or C#. For security, I applied least privilege: if a function only reads from one S3 bucket, the policy should be scoped to that specific ARN and nothing bigger, and I use AWS Secrets Manager for credentials. For deployment, I prefer CI/CD and IaC, whether that's SAM, Serverless Framework, or Terraform; the important thing is that Lambdas, events, IAM roles, and environment variables are defined in code and deployed consistently. For monitoring, I use CloudWatch for logs and metrics, set alarms, and add a dead-letter queue so events are not lost.
    Sophie: Tell me about a time a specific user pain point dictated a difficult technical pivot or architectural decision. What was the conflict between the 'ideal' technical solution and the 'necessary' user solution? How did you measure the success of your fix?
    MARCOS LEBRON: A good example was a time at EPAM where I worked on a large enterprise frontend ecosystem with React, TypeScript, microfrontends, shared UI libraries, and a BFF layer. The user pain point was coming from both developers and product teams. We had around 25 applications, and teams were struggling with inconsistent frontend patterns, duplicated logic, slow local builds, and repeated API transformation logic across different microfrontends. Features took longer to deliver, small UI changes required more coordination than they should, and onboarding new devs was slow. The ideal technical solution would probably have been to redesign the whole platform from scratch and make every application follow the same perfect architecture. But that was not realistic, because these were production apps, teams were actively delivering features, and we could not pause the business to do a full rewrite. The difficult part was balancing technical idealism with what users actually needed. As engineers, we often want the cleanest possible architecture, but the users needed faster builds, more consistency, and less friction immediately. So we chose an incremental migration instead of a risky full rewrite and made a more pragmatic architectural decision. Instead of trying to fix everything at once, we focused on the biggest pain points: we migrated the build system from Webpack to Vite, improved the shared UI libraries, and used the BFF layer to simplify how the frontend consumed backend data. We measured success with both technical and practical signals. The biggest concrete metric was that the Vite migration reduced build times by more than 50%. We also saw better developer experience, faster local feedback loops, more reuse from shared packages, and cleaner integration through the BFF. The takeaway for me was that architecture should solve real pain.
In this case, the right decision was not to chase a perfect rewrite, but to make targeted architectural changes that improved delivery speed and system consistency while keeping production stable.
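The dead-letter handling Marcos mentions can be approximated in-process to show the idea: retry each event a bounded number of times, then park failures instead of dropping them. All names here are illustrative:

```typescript
// In-memory analogue of wiring a DLQ onto a Lambda trigger: events that
// still fail after bounded retries are parked, not lost.
interface DeadLetter<T> {
  event: T;
  error: string;
}

function processBatch<T>(
  events: T[],
  handle: (e: T) => void,
  maxAttempts = 3,
): { processed: T[]; deadLetter: DeadLetter<T>[] } {
  const processed: T[] = [];
  const deadLetter: DeadLetter<T>[] = [];
  for (const event of events) {
    let lastError = "";
    let ok = false;
    for (let attempt = 1; attempt <= maxAttempts && !ok; attempt++) {
      try {
        handle(event);
        ok = true;
      } catch (err) {
        lastError = err instanceof Error ? err.message : String(err);
      }
    }
    if (ok) processed.push(event);
    else deadLetter.push({ event, error: lastError });
  }
  return { processed, deadLetter };
}
```

In AWS the same role is played by an SQS or SNS dead-letter target on the function's event source, with an alarm on the DLQ depth.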
  • Andres Rocha
    View conversation
    Sophie: Walk me through a production-grade project where you used TypeScript, React/Next.js, and Node.js. Beyond the scope, focus on a complex technical challenge—such as state management, SSR/ISR optimization, or backend architecture. Why did you choose specific libraries (e.g., NestJS vs. Express), and how did those choices impact the final outcome?
    Andres Rocha: Sure, one project that comes to mind is an investment design platform I worked on, where users could build and simulate capitalization tables in real time. Sounds straightforward until you factor in that every single input change needs to cascade instantly — dilution percentages, valuations, equity breakdowns — all live, all consistent across multiple investor profiles simultaneously. That was the real challenge. The core problem was state management. Cap tables are inherently interdependent, so a change in one cell isn't isolated — it triggers a chain of recalculations. We couldn't let the UI ever show a half-updated state, and we couldn't afford race conditions during simulation runs either. The solution was splitting the workload: heavy financial calculations went through RabbitMQ as background jobs, while the UI stayed responsive through optimistic updates and WebSocket-pushed confirmations. That separation of concerns was what kept the system from becoming a nightmare to reason about. On the backend, we evaluated Express versus NestJS, and the arguments for NestJS were pretty clear once we mapped out the requirements. Express gives you freedom, but on a financial platform with a larger team, that freedom tends to become inconsistency over time — everyone structures their modules differently, dependency management gets messy, and testing becomes painful. NestJS forces a modular, DI-based architecture from day one. That structure was worth more than any flexibility we were giving up. The TypeScript integration was another big factor. We were using strictly typed DTOs for every calculation service, and that caught a surprising number of potential bugs before they ever hit runtime. For financial logic, that's not a nice-to-have — it's a requirement. We also got clean patterns for the RabbitMQ microservice integration and WebSocket handling out of the box, rather than wiring all of that manually. 
In practice, we reduced boilerplate by around 30%, test coverage improved significantly, and the frontend team could mock services without needing to coordinate with us constantly — which made a real difference in sprint velocity. On the frontend, we used Next.js with ISR for the dashboard layer. Investment templates don't change by the second — they're read-heavy but do get updated periodically. ISR was the right fit: pages are pre-rendered statically, but when a user pushes an update, it propagates globally without a full rebuild. We kept LCP under 1.5s even on views with dense data visualizations, which was one of our harder non-functional requirements to hit.
    Sophie: Describe your experience implementing serverless workflows using AWS Lambda and S3. Can you detail a specific instance where you had to handle scalability, cold starts, or security (IAM roles)? How did you manage deployment and monitoring for these services?
    Andres Rocha: Most of my hands-on Lambda and S3 experience comes from my time at Konfio, where we ran a fairly large AWS-based architecture handling payment orders, KYC flows, and document processing across multiple microservices. A concrete case where scalability became a real concern was around document ingestion in the KYC platform. We had periods where a high volume of users were uploading identity documents simultaneously — images, PDFs, that kind of thing. The initial approach of routing uploads through the Lambda directly became a bottleneck fast: the function was handling both the HTTP layer and the S3 write, which under load created timeouts and inconsistent responses. The fix was straightforward once we identified it — we moved to presigned URLs, so the Lambda's only job was generating a short-lived, scoped URL that the client used to upload directly to S3. That decoupled the load from the function entirely and let S3 handle the ingestion at its own scale, which it does well. On cold starts — this was something we had to think about carefully for Lambdas that sat in critical paths, like the ones triggering downstream processes after a document was validated. Cold start latency in those cases was noticeable. The approach we settled on was Provisioned Concurrency for the functions that needed to respond fast, keeping a set number of instances warm at all times. For less critical functions, we used scheduled EventBridge rules as a lighter alternative — just enough to keep them from going completely cold during business hours. The tradeoff with Provisioned Concurrency is cost, so we were selective about where we applied it. For IAM, the rule we followed was strict least-privilege per function. Each Lambda had its own role scoped to exactly what it needed — if a function's job was to read from a specific S3 bucket, its role only allowed s3:GetObject on that bucket's ARN, nothing broader. 
This mattered especially in a fintech environment where a misconfigured or accidentally triggered function shouldn't be able to touch data it has no business accessing. We also used resource-based policies on the S3 side to add a second layer — so even if a role was misconfigured, the bucket itself would reject unauthorized actions. Deployment was managed through Terraform for infrastructure definitions and CodePipeline for the CI/CD side. Lambda configurations, IAM roles, S3 bucket policies — all version-controlled and applied through the pipeline, which made environment parity between staging and production much easier to maintain. For monitoring, CloudWatch was the primary tool — we had log groups per function, metric filters for error rates and duration, and dashboards aggregating the most critical functions. For anything that needed deeper tracing across services, we used X-Ray to follow a request through the chain and identify where latency or failures were actually happening.
    Sophie: Tell me about a time a specific user pain point dictated a difficult technical pivot or architectural decision. What was the conflict between the 'ideal' technical solution and the 'necessary' user solution? How did you measure the success of your fix?
    Andres Rocha: This one comes directly from the KYC platform at Konfio, and it's a good example of how skipping proper discovery with the actual end user can force a significant architectural rework down the line. The platform was originally built to support a document review flow for internal KYC analysts — the people responsible for validating loan applications. The initial design was straightforward: reviewers could see the most recent document uploaded by the applicant for each required field. That seemed reasonable at the time, but it turned out to be a critical gap. As the volume of applications grew, the team started flagging an unusual number of false rejections. Analysts were rejecting applications that should have been approved, and when we dug into the cases, a pattern emerged. Applicants would upload a document, it would get flagged for some reason — expired ID, wrong format, incomplete information — and they'd upload a new version. But the reviewer had no visibility into the previous submissions. They couldn't tell whether the issue had actually been corrected or whether the new document introduced a different problem. Without that context, decisions were being made on incomplete information. There was also a separate but related problem we uncovered during the same investigation. Sales reps, who were often sitting with applicants during the onboarding process, were sometimes uploading documents from their own accounts or pushing files through unofficial channels when something wasn't going through correctly. That was generating file inconsistencies on the platform side and bypassing the validation pipeline entirely. That category alone accounted for roughly 40% of rejection errors across all applications. The conflict was architectural. The system had been designed around a single-document-per-requirement model — there was no concept of document history or versioning at the data level. 
To fix this properly, we needed to restructure how documents were associated with applications: instead of a flat reference to the latest file, we needed a versioned relationship between each document type and all its submissions across the lifecycle of a request. That change wasn't isolated — it touched the storage layer, the review UI, the validation logic, and several downstream processes that consumed document state. The decision to do it properly rather than patch around it was driven by the KYC team themselves. Once we sat down with the analysts and mapped out how they actually worked through a review — what information they needed, what context they were missing — it became clear that a surface-level fix wouldn't hold. We needed to give them a tool that matched their actual workflow. We measured the outcome mostly through rejection error rates. False positives dropped significantly once reviewers had full document history available. The sales rep workaround problem was addressed through stricter upload attribution and validation at the point of submission, which closed that gap at the source. The broader result was a faster, more confident review process — fewer escalations, less back-and-forth with applicants, and a meaningful reduction in the time vendors were spending re-collecting documents that had already been submitted correctly.
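The versioned-document model Andres describes, a history of submissions per document type instead of a flat latest-file reference, reduces to a small data structure. This is a sketch with invented field names, not Konfio's actual schema:

```typescript
// Every submission is kept, not just the latest, so a reviewer can see
// the full history for each required document type.
interface Submission {
  version: number;
  uploadedBy: string; // enables upload attribution at the point of submission
  uploadedAt: string;
  fileKey: string;
}

class DocumentHistory {
  private byType = new Map<string, Submission[]>();

  submit(docType: string, uploadedBy: string, fileKey: string): Submission {
    const versions = this.byType.get(docType) ?? [];
    const entry: Submission = {
      version: versions.length + 1,
      uploadedBy,
      uploadedAt: new Date().toISOString(),
      fileKey,
    };
    versions.push(entry);
    this.byType.set(docType, versions);
    return entry;
  }

  latest(docType: string): Submission | undefined {
    const versions = this.byType.get(docType);
    return versions?.[versions.length - 1];
  }

  // The full history: exactly what the flat latest-file model could not answer.
  history(docType: string): Submission[] {
    return [...(this.byType.get(docType) ?? [])];
  }
}
```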
  • Sergio Matamoros strong fit
    View conversation
    Sophie: Walk me through a production-grade project where you used TypeScript, React/Next.js, and Node.js. Beyond the scope, focus on a complex technical challenge—such as state management, SSR/ISR optimization, or backend architecture. Why did you choose specific libraries (e.g., NestJS vs. Express), and how did those choices impact the final outcome?
    Sergio Matamoros: Developing the Ariel Scheduling System required a robust architecture using NestJS for a modular, type-safe backend and Next.js to leverage Incremental Static Regeneration for optimal performance. The core technical challenge involved managing real-time booking synchronization to prevent race conditions, which I solved by implementing optimistic updates via TanStack Query and WebSockets for immediate feedback and data integrity. This choice of a structured framework over a more minimal approach like Express allowed for seamless validation through DTOs and a scalable codebase that maintained high speed and reliability even as complex features like inventory management were integrated.
    Sophie: Describe your experience implementing serverless workflows using AWS Lambda and S3. Can you detail a specific instance where you had to handle scalability, cold starts, or security (IAM roles)? How did you manage deployment and monitoring for these services?
    Sergio Matamoros: I have implemented serverless workflows using AWS Lambda and S3 to automate high-volume data processing, specifically for managing dynamic assets and inventory updates in a production environment. To address the challenge of cold starts, I optimized the execution environment by minimizing the deployment package size and utilizing Provisioned Concurrency for time-sensitive triggers, ensuring the system remained responsive during sudden traffic spikes. Scalability was managed through S3 event notifications that triggered Lambda functions asynchronously, while security was strictly enforced using the principle of least privilege with granular IAM roles, limiting function access only to the specific S3 prefixes and database rows required for the task. For deployment and monitoring, I utilized the AWS Serverless Application Model (SAM) alongside CloudWatch for real-time logging and performance metrics, allowing me to track execution duration and memory usage to ensure the workflow remained cost-effective and resilient under heavy loads.
    Sophie: Tell me about a time a specific user pain point dictated a difficult technical pivot or architectural decision. What was the conflict between the 'ideal' technical solution and the 'necessary' user solution? How did you measure the success of your fix?
    Sergio Matamoros: While developing the Ariel Scheduling System, a significant user pain point emerged when clients in areas with unstable internet connectivity reported that the application would freeze or lose booking data during brief signal drops, leading to a "necessary" pivot toward a robust offline-first synchronization strategy. The "ideal" technical solution from a developer's perspective was a strictly server-authoritative model that ensured absolute data consistency by requiring a constant connection, but the "necessary" user solution demanded that the interface remain interactive even when offline. This forced an architectural shift to implement a local-first state using IndexedDB through TanStack Query's persistent cache, allowing users to continue the booking flow while the application queued requests locally. The conflict was resolved by creating a synchronization engine that handled background reconciliation once the connection was restored, prioritizing the user's intent over immediate server confirmation. Success was measured by a 35% decrease in abandoned booking sessions and a significant drop in support tickets related to "lost data," proving that sacrificing a purely synchronous architecture for a more resilient, client-side caching model was essential for real-world usability in the technician's specific operating environment.
  • Sebastián Josué Montero Madrigal strong fit
    View conversation
    Sophie: Walk me through a production-grade project where you used TypeScript, React/Next.js, and Node.js. Beyond the scope, focus on a complex technical challenge—such as state management, SSR/ISR optimization, or backend architecture. Why did you choose specific libraries (e.g., NestJS vs. Express), and how did those choices impact the final outcome?
    Sebastián Josué Montero Madrigal: While my strongest production experience has been backend-focused, I've been applying full-stack principles using TypeScript, React, and Node.js concepts. I built a REST API with ASP.NET Core for managing a motorcycle inventory system, using a layered architecture with controllers, services, and data access. A key challenge was maintaining scalability and clean separation of concerns, which I solved by isolating business logic in a service layer and using dependency injection. This is why I would choose NestJS over Express in Node.js, since it enforces a similar structured approach. On the frontend side, I've worked with React, focusing on state management and component design, ensuring efficient data flow and avoiding unnecessary re-renders. Regarding Next.js, I understand trade-offs like SSR vs. ISR: for frequently changing data, such as inventory status, I would use SSR for consistency, while ISR would be useful for less dynamic content to improve performance. Overall, I bring strong backend fundamentals and a growing practical understanding of full-stack development, especially around architecture and data flow.
    Sophie: Describe your experience implementing serverless workflows using AWS Lambda and S3. Can you detail a specific instance where you had to handle scalability, cold starts, or security (IAM roles)? How did you manage deployment and monitoring for these services?
    Sebastián Josué Montero Madrigal: I haven't yet implemented a full production workflow using AWS Lambda and S3, but I have studied and practiced serverless architecture concepts and how they apply to scalable systems. From my backend experience building REST APIs, I understand how to design stateless services, which translates well to Lambda functions. For example, in a system like my motorcycle inventory API, I would use Lambda to handle event-driven tasks such as processing updates or handling file uploads, with S3 used for storage. Regarding scalability, one advantage of Lambda is its automatic scaling, but I'm aware of challenges like cold starts; to mitigate this I would minimize function size, reduce dependencies, and consider strategies like keeping functions warm for latency-sensitive operations. For security, I understand the importance of IAM roles and least-privilege access: each Lambda function should only have the permissions strictly necessary, for example limited access to specific S3 buckets or actions. For deployment and monitoring, I'm familiar with tools like AWS SAM or the Serverless Framework for structured deployments, and CloudWatch for logging and performance monitoring. Even though I'm still building hands-on experience with these tools, I have a solid understanding of how to apply them in real scenarios. Overall, while my direct production experience with AWS serverless is still growing, I have a strong foundation in backend architecture, and I'm actively working on applying these concepts in cloud environments.
    Sophie: Tell me about a time a specific user pain point dictated a difficult technical pivot or architectural decision. What was the conflict between the 'ideal' technical solution and the 'necessary' user solution? How did you measure the success of your fix?
    Sebastián Josué Montero Madrigal: In one of my backend projects, I built a motorcycle inventory system where users needed to quickly view and update vehicle availability. Initially, I designed the system with a more "ideal" architecture: strict separation of layers, normalized data structures, and multiple relational joins to ensure consistency and scalability. However, during testing I realized this approach introduced latency when retrieving data, especially for common user actions like listing available vehicles. From a technical perspective, the design was clean and correct, but it didn't provide the responsiveness users needed. The pivot was to optimize for the user experience rather than strict architectural purity: I introduced more efficient queries and simplified some data-access patterns, even if that meant slightly relaxing normalization in certain areas. This reduced response times significantly and made the system feel much more responsive. The main conflict was between maintaining a perfectly structured, theoretically ideal backend versus delivering fast, practical performance for end users; I chose to prioritize the user experience while still keeping the system maintainable. To measure success, I compared response times before and after the changes and validated that common operations became noticeably faster. Additionally, the system handled frequent queries more efficiently without affecting stability. This experience reinforced the importance of balancing clean architecture with real-world user needs, especially in performance-sensitive applications.
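The trade-off Sebastián lands on, relaxing normalization to speed up the hot listing path, can be shown as a denormalized read model updated on writes. Names are illustrative:

```typescript
// Denormalized read model: instead of joining vehicle, model, and status
// tables on every listing request, keep a precomputed index of available
// vehicles that is maintained on each write.
interface Vehicle {
  id: string;
  model: string;
  available: boolean;
}

class InventoryReadModel {
  private vehicles = new Map<string, Vehicle>();
  private availableIds = new Set<string>(); // the denormalized index

  upsert(v: Vehicle): void {
    this.vehicles.set(v.id, v);
    if (v.available) this.availableIds.add(v.id);
    else this.availableIds.delete(v.id);
  }

  // The hot path: proportional to the available set, no joins, no full scan.
  listAvailable(): Vehicle[] {
    return [...this.availableIds].map((id) => this.vehicles.get(id)!);
  }
}
```

The cost is the classic one: writes do slightly more work to keep the index in sync, in exchange for cheap reads on the operation users hit most.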
  • José Torres strong fit
    View conversation
    Sophie: Walk me through a production-grade project where you used TypeScript, React/Next.js, and Node.js. Beyond the scope, focus on a complex technical challenge—such as state management, SSR/ISR optimization, or backend architecture. Why did you choose specific libraries (e.g., NestJS vs. Express), and how did those choices impact the final outcome?
    José Torres: For a recent high-demand project (called Project Windfall), I set up a monorepo with Turborepo using Next.js and NestJS. I chose the NestJS library over Express for the backend because, since it’s a system with complex business logic, I needed a modular and robust architecture that would facilitate dependency injection and long-term maintenance, avoiding the clutter that often arises in Express as the project scales. The biggest challenge was managing real-time state for thousands of concurrent users. To avoid compromising performance, I separated the logic: I used TanStack Query for server-side caching and Zustand for atomic global state. This, combined with a Redis layer on the backend and database index optimization, reduced response latency from 500 ms to less than 100 ms, enabling smooth updates without unnecessary re-renders. I used TypeScript to share data models between the backend and frontend within the monorepo, achieving end-to-end type safety, which virtually eliminated all integration errors. Ultimately, these decisions enabled the platform to handle critical traffic spikes while continuing to deliver a robust user experience. I employed the modular atomic design methodology to make the components more modular and scalable.
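The state split described here (server data cached by TanStack Query, client-only UI state in a small atomic store) can be sketched in plain TypeScript. This is an illustrative stand-in for the Zustand side only; all names are hypothetical, not from Project Windfall:

```typescript
// Minimal subscribable store for client-only UI state. Fetched server data
// stays in the server-cache layer and is never copied into this store, which
// is what avoids the unnecessary re-renders the answer mentions.
type Listener = () => void;

function createStore<T extends object>(initial: T) {
  let state = initial;
  const listeners = new Set<Listener>();
  return {
    getState: (): T => state,
    setState: (partial: Partial<T>): void => {
      state = { ...state, ...partial };
      listeners.forEach((l) => l()); // notify subscribers of the new state
    },
    subscribe: (l: Listener): (() => void) => {
      listeners.add(l);
      return () => listeners.delete(l); // returned function unsubscribes
    },
  };
}

// Hypothetical client-only state: nothing here came from the server.
const uiStore = createStore({
  selectedRowId: null as string | null,
  sidebarOpen: false,
});

uiStore.setState({ selectedRowId: "row-42" });
```

Keeping the store this small is the point: anything that can be re-fetched belongs in the server cache, not here.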
    Sophie: Describe your experience implementing serverless workflows using AWS Lambda and S3. Can you detail a specific instance where you had to handle scalability, cold starts, or security (IAM roles)? How did you manage deployment and monitoring for these services?
    José Torres: I've implemented serverless workflows using S3 as a trigger for AWS Lambda functions dedicated to file processing and data extraction. To handle scalability and prevent bottlenecks, I used Amazon SQS as a buffer, allowing the system to process load spikes asynchronously without hitting concurrency limits. Regarding cold starts, I optimized performance by reducing bundle sizes with esbuild and applying Provisioned Concurrency to business-critical processes. For security, I strictly follow the principle of least privilege using specific IAM Roles, limiting Lambda access to exact S3 buckets and prefixes. I manage deployment and monitoring through Infrastructure as Code (IaC) with the Serverless Framework integrated into GitHub Actions pipelines, while centralizing observability via CloudWatch and AWS X-Ray. This approach ensures an auditable, secure, and highly scalable environment that aligns perfectly with project operational costs.
    Sophie: Tell me about a time a specific user pain point dictated a difficult technical pivot or architectural decision. What was the conflict between the 'ideal' technical solution and the 'necessary' user solution? How did you measure the success of your fix?
    José Torres: In a project for an online casino, we faced a conflict between data consistency and perceived performance. The "ideal" technical solution was to validate every action directly with the server to ensure 100% balance accuracy, but this created frustrating latency for users with unstable connections. To address this pain point, we pivoted to an Optimistic UI architecture using Zustand and TanStack Query. This allowed the interface to react instantly (under 100ms) by predicting the success of an action on the client-side, while the actual synchronization happened in the background with a robust rollback system in case of server-side failure. We measured success through both product metrics and technical performance. We saw a 25% increase in session duration as the visual fluidity eliminated the friction that was causing user drop-off. Technically, we monitored the "rollback rate" to ensure our prediction logic remained accurate and optimized our Interaction to Next Paint (INP), moving from a critical state to "Good" in Core Web Vitals. This pivot proved that prioritizing immediate feedback over absolute synchronous consistency was essential for maintaining user engagement.
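The optimistic-UI pattern this answer describes can be sketched in plain TypeScript, without TanStack Query or Zustand; the snapshot-then-rollback logic is the core of it (the `Balance` shape and function names are illustrative, not the project's code):

```typescript
// Apply a state change instantly, sync with the server in the background,
// and roll back to the snapshot if the server rejects the action.
type Balance = { amount: number };

async function applyOptimistic(
  state: Balance,
  stake: number,
  syncWithServer: (next: Balance) => Promise<void>
): Promise<Balance> {
  const previous = { ...state };                      // snapshot for rollback
  const optimistic = { amount: state.amount - stake }; // shown to the user immediately
  try {
    await syncWithServer(optimistic);                 // real sync happens here
    return optimistic;
  } catch {
    return previous;                                  // server-side failure: roll back
  }
}
```

The "rollback rate" the answer monitors would then be the fraction of calls that take the `catch` path.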
  • pablo guzman strong fit
    View conversation
    Sophie: Walk me through a production-grade project where you used TypeScript, React/Next.js, and Node.js. Beyond the scope, focus on a complex technical challenge—such as state management, SSR/ISR optimization, or backend architecture. Why did you choose specific libraries (e.g., NestJS vs. Express), and how did those choices impact the final outcome?
    pablo guzman: The most complex project I have worked on was a travel/tourism application. I used NestJS to build the API service; I chose it because of the business requirements and because NestJS offers structure, fast development, and a scalable architecture (dependency injection, DTOs). On the frontend we used Next.js, because the main goal of the project was to get the maximum SEO score. It was also the first time I implemented TanStack Query, and it helped abstract and better handle most of the code dealing with the API, caching, and navigation, reducing load on services (DB, HTTP requests).
    Sophie: Describe your experience implementing serverless workflows using AWS Lambda and S3. Can you detail a specific instance where you had to handle scalability, cold starts, or security (IAM roles)? How did you manage deployment and monitoring for these services?
    pablo guzman: Sure. I have used AWS just once; most of my experience with this kind of deployment is on Vercel and Railway. In both I configured the server to spin down and back up when there were no requests for a certain period of time. I handle most of my deployments via GitHub, synced with those services, and monitoring via each service's dashboard.
    Sophie: Tell me about a time a specific user pain point dictated a difficult technical pivot or architectural decision. What was the conflict between the 'ideal' technical solution and the 'necessary' user solution? How did you measure the success of your fix?
    pablo guzman: There was a time when our team was asked to ship a React Native app to the iOS and Android stores, but it took more time than expected and the deadline was upon us. The team proposed implementing the app as a Progressive Web App (PWA) instead; we got approval, were able to deliver that way, and sidestepped the iOS and Android store issues for new apps.
  • Johan Marcel Fernández Castillo strong fit
    View conversation
    Sophie: Walk me through a production-grade project where you used TypeScript, React/Next.js, and Node.js. Beyond the scope, focus on a complex technical challenge—such as state management, SSR/ISR optimization, or backend architecture. Why did you choose specific libraries (e.g., NestJS vs. Express), and how did those choices impact the final outcome?
    Johan Marcel Fernández Castillo: Sure! I recently led a project where we built a high-performance platform using Next.js, TypeScript, and Tailwind CSS with shadcn/ui. The main challenge was balancing a rich UI with fast load times. I architected the solution using React Server Components (RSC) for the majority of the application. This significantly improved our SEO and reduced the JavaScript bundle size since most of the logic stayed on the server. I reserved Client Components strictly for interactive elements like complex forms or real-time filters. For the backend, instead of a separate Express server, I leveraged Next.js API Routes: this simplified the architecture and deployment. Since the frontend and backend share the same TypeScript definitions, I ensured full type safety across the entire stack. This reduced cold start latency and infrastructure costs. By using Server Actions, I eliminated the need for most 'useEffect' data fetching, making the data flow more predictable and much easier to maintain.
    Sophie: Describe your experience implementing serverless workflows using AWS Lambda and S3. Can you detail a specific instance where you had to handle scalability, cold starts, or security (IAM roles)? How did you manage deployment and monitoring for these services?
    Johan Marcel Fernández Castillo: I’ve used AWS Lambda and S3 primarily to build event-driven, asynchronous workflows. I implemented a pipeline where users uploaded large assets to an S3 bucket. This triggered a Lambda function to process the files (optimization and metadata extraction) before updating our database. To handle spikes in traffic, I optimized the Lambda package size by keeping dependencies minimal, which significantly reduced cold start times. For latency-critical paths, I used Provisioned Concurrency. I followed the principle of 'least privilege.' Instead of broad permissions, I created specific IAM roles for the Lambda with access only to the necessary S3 prefixes and specific DynamoDB tables. I managed the infrastructure using AWS SAM to ensure reproducible environments. For monitoring, I relied on CloudWatch Logs and set up Alarms to notify the team of any execution failures or duration timeouts.
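An S3-triggered Lambda handler of the kind described has roughly this shape; the event type below is trimmed to the two fields used, and the processing step is a stub (assumed names, not the actual pipeline code):

```typescript
// Trimmed-down S3 event shape: each record identifies one uploaded object.
type S3EventRecord = { s3: { bucket: { name: string }; object: { key: string } } };
type S3Event = { Records: S3EventRecord[] };

export async function handler(event: S3Event): Promise<string[]> {
  const processed: string[] = [];
  for (const record of event.Records) {
    const bucket = record.s3.bucket.name;
    const key = record.s3.object.key;
    // A real function would fetch the object from S3, optimize it, extract
    // metadata, and update the database; here we only record what we'd touch.
    processed.push(`${bucket}/${key}`);
  }
  return processed;
}
```

One invocation per upload is what gives the workflow its automatic, event-driven scaling.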
    Sophie: Tell me about a time a specific user pain point dictated a difficult technical pivot or architectural decision. What was the conflict between the 'ideal' technical solution and the 'necessary' user solution? How did you measure the success of your fix?
    Johan Marcel Fernández Castillo: A good example was in an HOS (Hours of Service) system I worked on for a company in the sugar industry. The ideal technical solution was strict validation on every step: driver license checks, freight direction validation, and route confirmation with separate API calls. Architecturally it was clean, but users hated it because dispatchers needed speed, not extra friction. The pain point was that operators were losing time with repeated validations, unnecessary re-renders, and form resets. I changed the flow using React, TypeScript, Apollo Client, and redux-form so validations only happened when values actually changed; for example, the driver license endpoint only ran if the license number was modified. I also optimized state handling to avoid unnecessary re-renders and duplicate requests. The backend team preferred stricter validation, but we prioritized the user experience while keeping critical checks. We measured success by faster form completion, fewer support complaints, and fewer repeated API calls. The best proof was that users stopped using manual workarounds and trusted the system more.
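The "only re-validate when the value actually changed" idea can be sketched framework-free: cache the last input per field and skip the remote call on repeats (all names here are illustrative, not from the HOS project):

```typescript
// Wrap an async validator so repeated calls with the same value reuse the
// cached result instead of hitting the endpoint again.
function makeChangeGatedValidator<T, R>(validate: (value: T) => Promise<R>) {
  let lastValue: T | undefined;
  let lastResult: R | undefined;
  return async (value: T): Promise<R> => {
    if (value === lastValue && lastResult !== undefined) {
      return lastResult;               // unchanged input: no network call
    }
    lastResult = await validate(value); // changed input: validate once
    lastValue = value;
    return lastResult;
  };
}

// Hypothetical license check: the counter shows how often the "endpoint" runs.
let remoteCalls = 0;
const checkLicense = makeChangeGatedValidator(async (license: string) => {
  remoteCalls++;
  return license.length === 9;
});
```

The same wrapper applied per form field is enough to remove the duplicate requests described above.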
  • Eyder Garcia medium fit
    View conversation
    Sophie: Walk me through a production-grade project where you used TypeScript, React/Next.js, and Node.js. Beyond the scope, focus on a complex technical challenge—such as state management, SSR/ISR optimization, or backend architecture. Why did you choose specific libraries (e.g., NestJS vs. Express), and how did those choices impact the final outcome?
    Eyder Garcia: Before reaching for any state management, I'd first note that SSR is used for almost all component and page development, because it follows best practices and achieves better performance for data retrieval and SEO. ISR is also very useful for updating pages or pieces of content without rebuilding the entire page. Regarding backend architecture, it depends on the product's scalability needs and how many teams collaborate on the project. For a product that doesn't need to scale much, we can start with a modular monolith pattern, since it's very clean and easier to understand for small-to-medium projects, and there I use Express.js. With NestJS I'd prefer a microservices pattern, since the framework is very friendly and stable, provides many resources, and is well suited to that. But again, it depends on how many people collaborate on the project and whether the product is highly scalable and complex and needs to run independently.
    Sophie: Describe your experience implementing serverless workflows using AWS Lambda and S3. Can you detail a specific instance where you had to handle scalability, cold starts, or security (IAM roles)? How did you manage deployment and monitoring for these services?
    Eyder Garcia: With AWS in general I have knowledge and a bit of experience; I've used S3 and EC2 for personal projects. It's not difficult for me to learn new topics or standards, as I have the techniques and skills to adapt and learn fast. For monitoring I can use the tooling provided by the AWS platform, and also Sentry for backend and frontend.
    Sophie: Tell me about a time a specific user pain point dictated a difficult technical pivot or architectural decision. What was the conflict between the 'ideal' technical solution and the 'necessary' user solution? How did you measure the success of your fix?
    Eyder Garcia: I can build a comparison table to identify the specific features of each approach, and then analyze it by exploring the key differences, the scope, and the disadvantages, and also do research using AI and analyze the comparison.
  • ANGEL VILLAVICENCIO strong fit
    View conversation
    Sophie: Walk me through a production-grade project where you used TypeScript, React/Next.js, and Node.js. Beyond the scope, focus on a complex technical challenge—such as state management, SSR/ISR optimization, or backend architecture. Why did you choose specific libraries (e.g., NestJS vs. Express), and how did those choices impact the final outcome?
    ANGEL VILLAVICENCIO: Sure. One of the main projects I worked on was a SaaS marketplace built with TypeScript, Next.js, and Node.js on AWS. On the frontend, we used SSR and ISR with Next.js to balance performance and dynamic data. A key challenge was handling frequently updated content without losing performance, so we combined ISR with client-side fetching using React Query. For state management, we avoided Redux and used React Query + local state, which simplified caching and improved performance. On the backend, we used NestJS for its modular architecture and scalability. The system was deployed on AWS using a serverless approach with Lambda, API Gateway, and S3 for storage. We integrated SQS for async processing (e.g., image handling, notifications) and used RDS with optimized queries for relational data. We also leveraged CloudFront for CDN and improved latency, and implemented infrastructure as code for consistent deployments. This allowed us to scale efficiently, decouple services, and maintain high performance under increasing load.
    Sophie: Describe your experience implementing serverless workflows using AWS Lambda and S3. Can you detail a specific instance where you had to handle scalability, cold starts, or security (IAM roles)? How did you manage deployment and monitoring for these services?
    ANGEL VILLAVICENCIO: Sure. I’ve worked extensively with serverless workflows using AWS Lambda and S3 in production environments. In one project, we built a pipeline for processing user-uploaded images. Files were uploaded to S3, which triggered Lambda functions for validation, transformation, and metadata storage. To handle scalability during peak traffic, we designed the system to be fully event-driven using S3 + SQS, allowing us to buffer and process jobs asynchronously without overloading the system. To mitigate cold starts, we kept Lambdas lightweight, optimized dependencies, and split functions by responsibility. For critical paths, we also tuned memory allocation to improve execution time. On the security side, we used IAM roles with least-privilege access, ensuring each Lambda only had permissions to the specific S3 buckets or services it needed. For deployment, we used infrastructure as code (AWS CDK) with CI/CD pipelines for consistent releases. For monitoring, we relied on CloudWatch for logs, metrics, and alerts, and also used AWS X-Ray for distributed tracing to analyze latency and debug issues across services. Additionally, in S3 we implemented lifecycle rules to optimize long-term storage costs, automatically transitioning objects to cheaper storage classes based on usage patterns. This setup allowed us to scale efficiently while maintaining reliability, observability, and cost efficiency.
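A least-privilege IAM policy document of the kind described (read access to one prefix of one bucket) might look like the following; the bucket and prefix names are placeholders, not values from the project:

```typescript
// Illustrative IAM policy document for a Lambda that may only read uploads
// under a single prefix of a single bucket. Anything else is denied by default.
const lambdaS3Policy = {
  Version: "2012-10-17",
  Statement: [
    {
      Sid: "ReadIncomingUploadsOnly",
      Effect: "Allow",
      Action: ["s3:GetObject"],
      Resource: ["arn:aws:s3:::example-uploads-bucket/incoming/*"],
    },
  ],
};

console.log(JSON.stringify(lambdaS3Policy, null, 2));
```

Scoping the `Resource` ARN to a prefix, rather than granting `s3:*` on all buckets, is what "least privilege" means in practice here.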
    Sophie: Tell me about a time a specific user pain point dictated a difficult technical pivot or architectural decision. What was the conflict between the 'ideal' technical solution and the 'necessary' user solution? How did you measure the success of your fix?
    ANGEL VILLAVICENCIO: Sure, in one project, we initially had multiple systems running under a monolithic architecture. They were stable and working well, but as the business grew, the setup became costly and harder to scale. From a technical perspective, the ideal approach was to take time to analyze all the systems, define proper boundaries, and redesign the architecture into well-structured services. However, the business needed quick results and wasn’t willing to wait for a full redesign. The conflict was between building a clean, long-term scalable architecture versus delivering immediate improvements. We took an incremental approach. Instead of redesigning everything upfront, we identified the most critical systems and started decoupling them into independent services, keeping future scalability in mind. This allowed us to deliver value quickly while progressively improving the architecture. The trade-off was not having the full picture from the start, which required adjustments later. However, it helped us gain stakeholder trust and evolve the system without disrupting operations. We measured success through improved performance in key flows, reduced load on critical components, and faster feature delivery.
  • Ronaldo Sergio Jara Condori strong fit
    View conversation
    Sophie: Walk me through a production-grade project where you used TypeScript, React/Next.js, and Node.js. Beyond the scope, focus on a complex technical challenge—such as state management, SSR/ISR optimization, or backend architecture. Why did you choose specific libraries (e.g., NestJS vs. Express), and how did those choices impact the final outcome?
    Ronaldo Sergio Jara Condori: One of my big technical challenges was creating an AI application called "Nawa method". I needed to choose the whole tech stack and build the entire architecture by myself. I was evaluating NestJS and Express for the backend, and comparing the alternatives I chose Nest over Express for two main reasons: first, it has its own structured architecture that is very easy to follow, so you don't have to invent a new bespoke one; it's also easier for new team members to pick up, which is very helpful when you want to grow the team. Second, managing concurrency is easier than in other frameworks, and in an application where you want to implement AI it's very important to have a tool that helps with that. For the frontend I chose React with React Context. The challenge there was how to structure the global state, because the application was planned to have a lot of interactivity and gamification. It was a real challenge, but I divided responsibilities by creating different stores, each managing just one thing instead of one huge store, and that helped the development a lot.
    Sophie: Describe your experience implementing serverless workflows using AWS Lambda and S3. Can you detail a specific instance where you had to handle scalability, cold starts, or security (IAM roles)? How did you manage deployment and monitoring for these services?
    Ronaldo Sergio Jara Condori: Honestly, I don't have hands-on experience with AWS Lambda or S3. Infrastructure and cloud services haven't been my main focus so far. What I do have some experience with is monitoring and observability. I'll choose OpenTelemetry for monitoring and manage traces, logs and metrics. I think it's becoming the standard and it integrates well with most monitoring tools. For error tracking I've used Sentry, which I find very reliable for catching and diagnosing issues in production.
    Sophie: Tell me about a time a specific user pain point dictated a difficult technical pivot or architectural decision. What was the conflict between the 'ideal' technical solution and the 'necessary' user solution? How did you measure the success of your fix?
    Ronaldo Sergio Jara Condori: Once I was building an MVP for a client that shifted mid-way from a CMS with AI to a completely different product — AI chatbots. The ideal solution would've been to remove the old code entirely, but given the timeline I kept both systems coexisting in the same repo, heavily commented the old code and moved forward. It created technical debt, but it was the necessary call. What saved me was having built a solid design system upfront — the UI stayed mostly intact, changes were mainly in routes and larger components. The client got their MVP on time, that was the measure that mattered.
  • Jean Carlos Vega Vidarte strong fit
    View conversation
    Sophie: Walk me through a production-grade project where you used TypeScript, React/Next.js, and Node.js. Beyond the scope, focus on a complex technical challenge—such as state management, SSR/ISR optimization, or backend architecture. Why did you choose specific libraries (e.g., NestJS vs. Express), and how did those choices impact the final outcome?
    Jean Carlos Vega Vidarte: In a recent project, I contributed to a financial product affiliation platform using TypeScript, React with Next.js, and a Node.js backend. A key challenge was managing complex, interdependent state (coupons, campaigns, bank accounts), which I handled using Redux to ensure predictable updates and consistency across multi-step flows. On the backend, we adopted a microservices architecture with NestJS instead of Express due to its structured architecture, dependency injection, and TypeScript-first design, which improved maintainability, scalability, and team onboarding while enabling clear separation of domains like payments and user management.
    Sophie: Describe your experience implementing serverless workflows using AWS Lambda and S3. Can you detail a specific instance where you had to handle scalability, cold starts, or security (IAM roles)? How did you manage deployment and monitoring for these services?
    Jean Carlos Vega Vidarte: I’ve implemented a serverless workflow using AWS Lambda and Amazon S3 to process PDF invoices that were scraped by another service. When a file was stored in S3, it triggered a Lambda function that handled parsing and processing. While the workload didn’t involve extreme scalability challenges, I still designed the flow to be event-driven and stateless, allowing it to scale automatically with incoming uploads. For security, I configured AWS Identity and Access Management roles with least-privilege access, ensuring the function could only read from the specific S3 bucket and write to required resources. For deployment, we used AWS CloudFormation to define and automate the provisioning of Lambdas, permissions, and triggers, which improved consistency across environments. Monitoring and logging were handled through CloudWatch, allowing us to track execution logs and debug issues when needed.
    Sophie: Tell me about a time a specific user pain point dictated a difficult technical pivot or architectural decision. What was the conflict between the 'ideal' technical solution and the 'necessary' user solution? How did you measure the success of your fix?
    Jean Carlos Vega Vidarte: In a financial application, users reported they were unable to delete certain objects from the admin panel, which was blocking operational workflows. After investigating, I found the root cause in our multi-tenant architecture: we had foreign key relationships spanning different database schemas, and the ORM couldn’t reliably resolve those dependencies. The “ideal” solution would have been to fully normalize and strictly enforce cross-schema relationships, but that approach wasn’t practical given the ORM limitations and the immediate user impact. Instead, I made a pragmatic architectural pivot by constraining foreign keys to specific schemas and redesigning the deletion process—moving heavy dependency cleanup into an asynchronous workflow. This introduced some additional complexity in terms of eventual consistency, but it allowed deletions to complete reliably and without blocking the UI. We measured success by a drop in deletion-related errors, faster response times in the admin panel, and fewer support tickets from users experiencing this issue.
  • Carlos Hernández strong fit
    View conversation
    Sophie: Walk me through a production-grade project where you used TypeScript, React/Next.js, and Node.js. Beyond the scope, focus on a complex technical challenge—such as state management, SSR/ISR optimization, or backend architecture. Why did you choose specific libraries (e.g., NestJS vs. Express), and how did those choices impact the final outcome?
    Carlos Hernández: In our digital signature application, we had a flow where the user could scan a QR code with their phone to sign more easily. We had to decide how to notify the desktop app that the user had signed on the phone. We decided to use WebSockets instead of long-polling, as it's a more performant solution. We used the socket.io library instead of a native WebSocket solution because rooms and namespaces are easier to manage that way. With this decision, we gained more performance on both the frontend and the backend.
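The room-based notification flow described here (the desktop session joins a room keyed by the QR token, and the phone's signing event is broadcast only to that room) can be sketched with an in-memory stand-in. This is not the socket.io API, just the idea behind choosing it:

```typescript
// In-memory stand-in for socket.io rooms: a room name maps to the set of
// connected sends (subscribers) that should receive events for that room.
type Send = (event: string) => void;

class Rooms {
  private rooms = new Map<string, Set<Send>>();

  join(room: string, send: Send): void {
    if (!this.rooms.has(room)) this.rooms.set(room, new Set());
    this.rooms.get(room)!.add(send);
  }

  emit(room: string, event: string): void {
    // Broadcast only to members of this room; other sessions never see it.
    for (const send of this.rooms.get(room) ?? []) send(event);
  }
}

const rooms = new Rooms();
let desktopGot = "";
rooms.join("qr-token-1", (e) => { desktopGot = e; }); // desktop joins on QR display
rooms.emit("qr-token-1", "signed");                   // phone emits after signing
```

With socket.io the room bookkeeping above comes for free, which is the ease-of-management point the answer makes.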
    Sophie: Describe your experience implementing serverless workflows using AWS Lambda and S3. Can you detail a specific instance where you had to handle scalability, cold starts, or security (IAM roles)? How did you manage deployment and monitoring for these services?
    Carlos Hernández: I created an API for scraping real estate pages using a stack of API Gateway + Lambda, with S3 for saving the images of the properties. I created a GitHub Actions workflow to handle the deployment of the Lambda function and improve scalability. Of course, I had to create IAM roles for the Lambda function to access the S3 bucket. For monitoring, I used CloudWatch logs and X-Ray traces.
    Sophie: Tell me about a time a specific user pain point dictated a difficult technical pivot or architectural decision. What was the conflict between the 'ideal' technical solution and the 'necessary' user solution? How did you measure the success of your fix?
    Carlos Hernández: Sometimes deadlines are so short that the team doesn't get time to optimize solutions, so it's always important to find space for paying down technical debt. Once the deployment of the feature was finished, we detected some slowness in a few of the services. We took time to look into it, and we managed to reduce the response time and gained a lot of performance.
Apr 28, 02:17 · Interview completed
Kickpot · Hassan Ali Yassine · Not connected · 1 · No · No · Apr 22, 13:47 · Registered
Moons · Johan Mauricio Mosquera Solano · Connected · 1 · Yes · 1 · No
1
  • Giuseppe strong fit
    View conversation
    Sophie: What is your previous experience in dental support?
    Giuseppe: Right, 5 years
    Sophie: What tools or software have you used in dental support?
    Giuseppe: HubSpot CRM
    Sophie: How do you handle stressful situations with difficult patients?
    Giuseppe: With patience. I try to resolve it myself using my own judgment, and if the situation escalates, I contact my boss
    Sophie: Are you available to work weekends or irregular shifts?
    Giuseppe: Yes
    Sophie: How do you stay up to date on dental practices and technologies?
    Giuseppe: By reading information on Perplexity
    Sophie: Why do you want to work at this particular dental clinic?
    Giuseppe: Because I'm very interested in their brand
    Sophie: What is your availability to start if you were selected?
    Giuseppe: Starting today
Apr 27, 21:13 · Interview completed
Moons · People HR · Connected
3
  • Candidat · 2 candidates · created 2026-04-16
  • Acquisition · 2 candidates · created 2026-04-17
  • Revenue OPS · 18 candidates · created 2026-04-24
Yes · 22 · Yes
8
  • Susana
    View conversation
    Sophie: Can you tell me about a time you successfully carried out a consultative sale?
    Susana: Sure, though actually I've never done one
    Sophie: Can you tell me about a time you successfully carried out a consultative sale?
    Susana: When I was in school
    Sophie: How would you organize and plan your visits to dentists to offer the products?
    Susana: With an agenda in Google Sheets; that's where I organize my appointments
    Sophie: What is your strategy to ensure an effective sales close?
    Susana: My strategy is to focus on resolving objections
    Sophie: How do you handle a dissatisfied client, and what steps do you take to resolve their problem?
    Susana: I analyze the situation and why they are upset, then support the client in resolving the problem.
    Sophie: How available are you to start in this position?
    Susana: I can start right away
    Sophie: What motivates you to work in sales as a dental sales representative offering services?
    Susana: I love sales
  • Johitan
    View conversation
    Sophie: Can you tell me about a time you successfully carried out a consultative sale?
    Johitan: I don't remember
    Sophie: Can you tell me about a time you successfully carried out a consultative sale?
    Johitan: Yes, I sold a car.
    Sophie: How would you organize and plan your visits to dentists to offer the products?
    Johitan: With excellent planning.
    Sophie: What is your strategy to ensure an effective sales close?
    Johitan: One where the client feels confident and we reach a mutual agreement.
    Sophie: How do you handle a dissatisfied client, and what steps do you take to resolve their problem?
    Johitan: I try to calm them down, do some investigation, and give an appropriate response.
    Sophie: How available are you to start in this position?
    Johitan: Very willing, I can start right away.
    Sophie: What motivates you to work in sales as a dental sales representative offering services?
    Johitan: I'm a real fighter in life and I love sales.
  • Johan Mosquera
    View conversation
    Sophie: Tell me, what AI tools do you currently use in your daily work?
    Johan Mosquera: Yes, of course.
  • XIMENA SANCHEZ RIVERA
  • Víctor Terrazas
    View conversation
    Sophie: Tell me, what AI tools do you currently use in your daily work?
    Víctor Terrazas: Standing by for the questions. Copilot, Gemini, ChatGPT
  • Eduardo
    View conversation
    Sophie: Tell me, what AI tools do you currently use in your daily work?
    Eduardo: Day to day I use ChatGPT, and when I'm building automations or need more complex tasks I like to use Claude. When I'm documenting processes, improvements, or new projects, I like to use Notion with its AI assistants.
  • José Manuel Delgado Guirola
    View conversation
    Sophie: Tell me, what AI tools do you currently use in your daily work?
    José Manuel Delgado Guirola: Mainly LLMs, in my case Claude
  • José Manuel Sánchez Hernández
    View conversation
    Sophie: Tell me, what AI tools do you currently use in your daily work?
    José Manuel Sánchez Hernández: LenIa, for comparing major medical expense policy endorsements and comparing bulk lists to extract or synthesize the base data
    Sophie: Tell me, what tools do you use to manage processes and tasks, do follow-ups, and make sure nothing falls through the cracks?
    José Manuel Sánchez Hernández: Excel, Word, and Notepad
    Sophie: Tell me, what data analysis tools have you used to organize information, metrics, dashboards?
    José Manuel Sánchez Hernández: Excel at an advanced level; SQL and Power BI at a basic level
    Sophie: When could you start working if you were selected for this position?
    José Manuel Sánchez Hernández: In 2 weeks, so I can hand off my current duties
10
  • David Alejandro López Martínez strong fit
    View conversation
    Sophie: Tell me, what AI tools do you currently use in your daily work?
    David Alejandro López Martínez: I currently use AI tools like ChatGPT for information analysis, writing, and process optimization. I rely on Excel and Google Sheets for data handling, and on Notion and Trello for organizing and tracking projects. I also use Canva to create visual content quickly, and I'm starting to integrate tools like Power BI and Zapier for analysis and automation, with the goal of improving efficiency and decision-making.
    Sophie: What tools do you use to manage processes and tasks, follow up, and make sure nothing slips through?
    David Alejandro López Martínez: To manage processes and tasks I use tools like Notion and Trello, where I organize activities by priority, dates, and follow-up. I rely on Google Calendar for daily planning and reminders. I also work with checklists and periodic reviews to make sure nothing is left pending, and I use tools like ChatGPT to structure processes and optimize organization, which helps me keep an orderly, efficient workflow.
    Sophie: What data-analysis tools have you used to organize information, metrics, and dashboards?
    David Alejandro López Martínez: I've mainly used Excel and Google Sheets to organize information, analyze metrics, and track data with pivot tables, filters, and reports
    Sophie: When could you start working if you were selected for this position?
    David Alejandro López Martínez: I could join immediately or on very short notice, since I'm currently available.
    Sophie: Can you work on-site every day at our CDMX office near the Viaducto metro station?
    David Alejandro López Martínez: Yes, of course
  • Berenice Gómez strong fit
    View conversation
    Sophie: Tell me, what AI tools do you currently use in your daily work?
    Berenice Gómez: ChatGPT (code), Google Gemini
    Sophie: What tools do you use to manage processes and tasks, follow up, and make sure nothing slips through?
    Berenice Gómez: Mainly the Microsoft suite: Excel, Access, and an ERP
    Sophie: What data-analysis tools have you used to organize information, metrics, and dashboards?
    Berenice Gómez: Excel, Power BI, and occasionally SQL
    Sophie: When could you start working if you were selected for this position?
    Berenice Gómez: Immediate availability
    Sophie: Can you work on-site every day at our CDMX office near the Viaducto metro station?
    Berenice Gómez: Yes
  • Marco Antonio Arroyo Romero opted out
    View conversation
    Sophie: Tell me, what AI tools do you currently use in your daily work?
    Marco Antonio Arroyo Romero: ChatGPT and Breaze
    Sophie: What tools do you use to manage processes and tasks, follow up, and make sure nothing slips through?
    Marco Antonio Arroyo Romero: That depends a lot on the CRM you're using, since most come with workflows you can program, or functions you can configure to run conditionally to manage each activity in the funnel
    Sophie: What data-analysis tools have you used to organize information, metrics, and dashboards?
    Marco Antonio Arroyo Romero: The HubSpot CRM has its own custom-reports section, but I've also worked with Tableau
    Sophie: When could you start working if you were selected for this position?
    Marco Antonio Arroyo Romero: Immediately; I have no pending projects right now.
    Sophie: Can you work on-site every day at our CDMX office near the Viaducto metro station?
    Marco Antonio Arroyo Romero: Could we negotiate a hybrid arrangement?
    Sophie: mid_interview_optout
    Marco Antonio Arroyo Romero: No
  • Emerson Kevin Martinez Ortiz strong fit
    View conversation
    Sophie: Tell me, what AI tools do you currently use in your daily work?
    Emerson Kevin Martinez Ortiz: Claude, Gemini, ChatGPT, Copilot, Notion
    Sophie: What tools do you use to manage processes and tasks, follow up, and make sure nothing slips through?
    Emerson Kevin Martinez Ortiz: Besides AI tools, I use traditional tools to keep process and task management solid: Notion for organizing projects, Excel for tallying and validating reports, and Outlook/Teams for communication
    Sophie: What data-analysis tools have you used to organize information, metrics, and dashboards?
    Emerson Kevin Martinez Ortiz: SQL queries to pull data, which I then analyze in Excel with lookup, logical, and calculation functions, plus VBA automations, before presenting it to management in interactive Power BI dashboards
    Sophie: When could you start working if you were selected for this position?
    Emerson Kevin Martinez Ortiz: Immediately
    Sophie: Can you work on-site every day at our CDMX office near the Viaducto metro station?
    Emerson Kevin Martinez Ortiz: Yes, of course, no problem
  • Noe Perea strong fit
    View conversation
    Sophie: Tell me, what AI tools do you currently use in your daily work?
    Noe Perea: I mainly rely on Copilot for my BI analyses, and Anaplan
    Sophie: What tools do you use to manage processes and tasks, follow up, and make sure nothing slips through?
    Noe Perea: Notion and Asana
    Sophie: What data-analysis tools have you used to organize information, metrics, and dashboards?
    Noe Perea: Power BI, Tableau, SQL, Power Query, and Python
    Sophie: When could you start working if you were selected for this position?
    Noe Perea: Immediately
    Sophie: Can you work on-site every day at our CDMX office near the Viaducto metro station?
    Noe Perea: No problem
    Sophie: Can you work on-site every day at our CDMX office near the Viaducto metro station?
    Noe Perea: Yes
  • Isra Rendon strong fit
    View conversation
    Sophie: Tell me, what AI tools do you currently use in your daily work?
    Isra Rendon: ChatGPT, GitHub, and PyTorch, that's it
    Sophie: What tools do you use to manage processes and tasks, follow up, and make sure nothing slips through?
    Isra Rendon: Asana and Notion, mostly those, for keeping logs or making any corrections
    Sophie: What data-analysis tools have you used to organize information, metrics, and dashboards?
    Isra Rendon: Only Python, Microsoft tools, and a bit of Tableau
    Sophie: When could you start working if you were selected for this position?
    Isra Rendon: but the one I've used most is Python
    Sophie: When could you start working if you were selected for this position?
    Isra Rendon: Whenever the company says
    Sophie: Can you work on-site every day at our CDMX office near the Viaducto metro station?
    Isra Rendon: Yes
  • Samuel Jiménez strong fit
    View conversation
    Sophie: Tell me, what AI tools do you currently use in your daily work?
    Samuel Jiménez: Of course
    Sophie: Tell me, what AI tools do you currently use in your daily work?
    Samuel Jiménez: I currently integrate tools like Copilot for enterprise use, which I've put to work analyzing and managing tickets, as well as on repetitive work and data analysis. On the personal side, I have Claude Code, which I use for n8n workflows, chatbots, and documentation management
    Sophie: What tools do you use to manage processes and tasks, follow up, and make sure nothing slips through?
    Samuel Jiménez: I currently use JIRA and ServiceNow for managing incidents and requests, but I lean heavily on Obsidian, which has been very useful for mapping business processes and for technical documentation.
    Sophie: What data-analysis tools have you used to organize information, metrics, and dashboards?
    Samuel Jiménez: I currently use Power BI, with some pandas and matplotlib (Python). In certain cases I use AI to normalize and analyze some of the data. I've also used MATLAB in the past
    Sophie: When could you start working if you were selected for this position?
    Samuel Jiménez: Immediately
    Sophie: Can you work on-site every day at our CDMX office near the Viaducto metro station?
    Samuel Jiménez: Whenever you require. Yes
  • Miriam Jetzabel Espinoza Mejia opted out
    View conversation
    Sophie: Tell me, what AI tools do you currently use in your daily work?
    Miriam Jetzabel Espinoza Mejia: Claude
    Sophie: What tools do you use to manage processes and tasks, follow up, and make sure nothing slips through?
    Miriam Jetzabel Espinoza Mejia: Google Workspace
    Sophie: What data-analysis tools have you used to organize information, metrics, and dashboards?
    Miriam Jetzabel Espinoza Mejia: Excel, Salesforce, Google suite
    Sophie: When could you start working if you were selected for this position?
    Miriam Jetzabel Espinoza Mejia: I'm available
    Sophie: Can you work on-site every day at our CDMX office near the Viaducto metro station?
    Miriam Jetzabel Espinoza Mejia: Available remotely
    Sophie: mid_interview_optout
    Miriam Jetzabel Espinoza Mejia: No
  • Efrain Ayala strong fit
    View conversation
    Sophie: Tell me, what AI tools do you currently use in your daily work?
    Efrain Ayala: Currently, mainly ChatGPT
    Sophie: What tools do you use to manage processes and tasks, follow up, and make sure nothing slips through?
    Efrain Ayala: The Office suite and/or company-provided enterprise platforms for managing planning
    Sophie: What data-analysis tools have you used to organize information, metrics, and dashboards?
    Efrain Ayala: Mainly Excel, where I have the most experience
    Sophie: When could you start working if you were selected for this position?
    Efrain Ayala: Although I also have experience with visualization tools like Qlik and Looker
    Sophie: When could you start working if you were selected for this position?
    Efrain Ayala: Immediately
    Sophie: Can you work on-site every day at our CDMX office near the Viaducto metro station?
    Efrain Ayala: Of course, no problem. What are the working hours, from what time to what time?
  • Cynthia Berenice Pelayo Moreno strong fit
    View conversation
    Sophie: Tell me, what AI tools do you currently use in your daily work?
    Cynthia Berenice Pelayo Moreno: ChatGPT, Gemini, Canva.
    Sophie: What tools do you use to manage processes and tasks, follow up, and make sure nothing slips through?
    Cynthia Berenice Pelayo Moreno: Notion
    Sophie: What data-analysis tools have you used to organize information, metrics, and dashboards?
    Cynthia Berenice Pelayo Moreno: I'm taking a Power BI course but haven't finished it yet.
    Sophie: When could you start working if you were selected for this position?
    Cynthia Berenice Pelayo Moreno: In a week, once I hand over my current role.
    Sophie: Can you work on-site every day at our CDMX office near the Viaducto metro station?
    Cynthia Berenice Pelayo Moreno: Yes, no problem.
Apr 27, 21:24 · Interview completed
RicTEST · Riccardo Russo · Disconnected · 1 · Yes · 1 · No
1
  • John Doe weak fit
    View conversation
    Sophie: Can you describe a project where you used Django? What was your role?
    John Doe: I can't send a message to the same number that's running the interview
    Sophie: How do you stay updated with Django's latest features and updates?
    John Doe: I see you've rolled out the test feature too, nice
    Sophie: Can you share an experience where you had to collaborate with others on a project?
    John Doe: It works, though
    Sophie: Can you share an experience where you had to collaborate with others on a project?
    John Doe: So, the test went well; the only thing is it feels maybe a bit slow
Apr 16, 19:02Interview completed
Vargas Garcia · Alberto · Not connected
2
  • Test · 0 candidates · created 2026-04-15
  • test · 0 candidates · created 2026-04-16
No · No · Apr 15, 18:11 · Registered
B4talent · Ronald Branco · Connected
2
  • SAP · 1 candidate · created 2026-04-07
  • SAP ABAP · 10 candidates · created 2026-04-24
Yes · 16 · Yes
1
  • Marcio Felippe
5
  • Gabriela de Souza Medeiros Massi medium fit
    View conversation
    Sophie: What is your experience with ABAP programming? Can you tell me about a specific project?
    Gabriela de Souza Medeiros Massi: Yes, you can
    Sophie: How do you usually identify the root cause of a problem that appeared in your codebase?
    Gabriela de Souza Medeiros Massi: I took part in an SAP implementation, but on the tax side, such as NF inbound
    Sophie: How do you organize your time when working on multiple projects or tasks?
    Gabriela de Souza Medeiros Massi: Configuration of J1BTAX
    Sophie: Have you ever had to adapt to an unexpected change at work? How did you handle it?
    Gabriela de Souza Medeiros Massi: I usually work out what the priority is and classify tasks to see what needs to be delivered first, ranking them by priority level
    Sophie: Are you available for an immediate start, or do you have any commitment we should consider?
    Gabriela de Souza Medeiros Massi: Yes! There are many changes. Changes are always welcome, but of course always with plenty of organization
    Sophie: What ABAP best practices do you always try to follow in your work?
    Gabriela de Souza Medeiros Massi: I'm currently employed and would like to understand the offer better, haha
    Sophie: Can you tell me about a situation where you had to explain a technical concept to a non-technical person?
    Gabriela de Souza Medeiros Massi: I do that every day in the tax area! Explaining things to other departments is quite tricky
    Sophie: Have you worked with systems integration? What was that experience like?
    Gabriela de Souza Medeiros Massi: But I always try to illustrate the impact an error in that department can have on the company, and also explain the process end to end
  • Glauco Limeira strong fit
    View conversation
    Sophie: What is your experience with ABAP programming? Can you tell me about a specific project?
    Glauco Limeira: No problem. Today I'm more focused on functional work and managing SAP initiatives, especially in MM/Fiscal, with an emphasis on projects and the interface with the business. I know ABAP, but I don't currently work as a developer.
    Sophie: How do you usually identify the root cause of a problem that appeared in your codebase?
    Glauco Limeira: I usually start by validating the scenario with the user and reproducing the error. Then I analyze logs, dumps (ST22), and traces (ST05). When code is involved, I work with the ABAP team on the technical analysis. That way we reach the root cause and address it in the best way, always aligned with the business.
    Sophie: How do you organize your time when working on multiple projects or tasks?
    Glauco Limeira: I juggle multiple projects today, so I focus heavily on prioritizing by business impact and deadlines. I organize everything in backlogs/lists and keep constant alignment with stakeholders. I also replan when needed to guarantee quality deliveries.
    Sophie: Have you ever had to adapt to an unexpected change at work? How did you handle it?
    Glauco Limeira: Yes, especially in SAP projects, where scope changes happen a lot. In a recent case, priorities shifted mid-execution. I reorganized the plan together with the business, aligned expectations, and redistributed the work. With that, we kept delivering with quality despite the change
    Sophie: Are you available for an immediate start, or do you have any commitment we should consider?
    Glauco Limeira: I'm currently employed, so I'd need a transition period, but I can arrange a short-notice start. I prefer a remote or hybrid model, where I'm most productive.
    Sophie: What ABAP best practices do you always try to follow in your work?
    Glauco Limeira: I know ABAP and keep up with its best practices, mainly because I work alongside technical teams. I'm not a developer day to day, but I have enough grounding to discuss solutions, assess code quality, and ensure alignment with the business.
    Sophie: Can you tell me about a situation where you had to explain a technical concept to a non-technical person?
    Glauco Limeira: Yes, that's very common in my day-to-day. In SAP projects I've had to explain technical changes to business areas. I always try to translate them into the practical impact on the process, using everyday examples and avoiding technical jargon. That helps a lot with alignment and decision-making.
    Sophie: Have you worked with systems integration? What was that experience like?
    Glauco Limeira: Yes, I've worked extensively with systems integration, mainly involving SAP and other corporate systems. I take part from flow design through follow-up with technical teams, integrated testing, and business validation. It's very much part of my day-to-day.
  • Oswaldo Gonçalves de Lima Junior medium fit
    View conversation
    Sophie: What is your experience with ABAP programming? Can you tell me about a specific project?
    Oswaldo Gonçalves de Lima Junior: Just confirming, our meeting now at 15:30 is still on, right?
    Sophie: How do you usually identify the root cause of a problem that appeared in your codebase?
    Oswaldo Gonçalves de Lima Junior: Hello Ronald, all good, and you? Could we do Friday at 15:00?
    Sophie: How do you organize your time when working on multiple projects or tasks?
    Oswaldo Gonçalves de Lima Junior: I use Planner a lot to help with work tasks, and I'm still very fond of the classic notebook; I really like writing everything down, listing it, and executing by criticality
    Sophie: Have you ever had to adapt to an unexpected change at work? How did you handle it?
    Oswaldo Gonçalves de Lima Junior: Since I've been at this company for 17 years, I've been through countless changes in management, organizational structure, and leadership, so I'm very used to it and have never had trouble adapting
    Sophie: Are you available for an immediate start, or do you have any commitment we should consider?
    Oswaldo Gonçalves de Lima Junior: Look, I've been at the company a long time and wouldn't want to leave them in the lurch overnight; as I said, I'd like at least one to two weeks to get organized, if possible.
    Sophie: What ABAP best practices do you always try to follow in your work?
    Oswaldo Gonçalves de Lima Junior: As we discussed, I'm not part of ABAP development, but my specifications always follow a standard and are as detailed as possible so that the ABAP developer's work is easier and more accurate
    Sophie: Can you tell me about a situation where you had to explain a technical concept to a non-technical person?
    Oswaldo Gonçalves de Lima Junior: Thank you
    Sophie: Have you worked with systems integration? What was that experience like?
    Oswaldo Gonçalves de Lima Junior: Ronald, I'd like to confirm whether these questions relate to the tool you're testing, or whether I should actually answer them?
  • Carla Sousa strong fit
    View conversation
    Sophie: What is your experience with SAP ABAP? Can you give me an example of a project you worked on?
    Carla Sousa: I've been an ABAP developer for 14 years. My most recent engagement was a Trijay project for the client Santa Cruz; it was AMS! Before that, I worked at other consultancies and on other projects.
    Sophie: How do you identify the root cause of a technology problem?
    Carla Sousa: Sorry for the delay, I was driving. On the problem: I look at the error and try to reproduce it, debug, and analyze the code for cruft and for anything transported recently.
    Sophie: How do you organize your time on projects with tight deadlines?
    Carla Sousa: I try to prioritize what unblocks things fastest and has business impact, break the work into small tasks, and deliver them in sequence, aligning with the team to avoid surprises and reviewing the plan every day.
    Sophie: Have you worked on a multidisciplinary team? How do you communicate with professionals from other areas?
    Carla Sousa: Yes, many times. With functional consultants I mostly talk rules/process; with technical people I go straight into the detail. I validate understanding, avoid noise, and keep constant alignment.
    Sophie: Are you available to start immediately, or do you have a timeline in mind?
    Carla Sousa: Available for an immediate start!
    Sophie: What ABAP best practices do you always try to follow in your work?
    Carla Sousa: I keep the code clean, with clear names and a consistent standard. I reuse before reinventing: BAPIs, classes, CDS. I watch performance, writing correct SELECTs and avoiding unnecessary loops. I handle exceptions and messages properly, respect SAP standards, and avoid hardcoding.
    Sophie: Do you have experience with Fiscal and Financial receiving routines? Can you tell me a bit about it?
    Carla Sousa: Yes. I've worked with MIRO/MIGO and MM/FI integration: validating taxes, CFOPs, duplicatas, and postings; accounts payable, updates, and integrations; adjustments in exits/BADIs and reports for checking and reconciliation. I also handled discrepancies between purchase order, invoice, and goods receipt, and worked with the new Brazilian tax rules.
    Sophie: Can you tell me about a situation where you had to explain a technical concept to a non-technical person?
    Carla Sousa: I simplify the language, use everyday examples, and focus on the business impact. I avoid technical terms and confirm the person has understood before moving on.
    Sophie: Have you worked with systems integration? What was that experience like?
    Carla Sousa: I've integrated SAP with other systems, aligning rules, data formats, and validations. I took part from design through testing and production follow-up. I also handled errors, reprocessing, and alignment with other areas to keep everything stable. It was quite challenging but very rewarding.
  • Marco Antonio strong fit
    View conversation
    Sophie: What is your experience with SAP ABAP? Can you give me an example of a project you worked on?
    Marco Antonio: I'll list a few examples, but if you prefer, we can talk
    Sophie: What is your experience with SAP ABAP? Can you give me an example of a project you worked on?
    Marco Antonio: I'm an ABAP consultant with 5 years of experience, 4 of them at multinational companies. I've worked on several SAP projects: * AMS support -> troubleshooting, improving existing programs and processes, and creating new programs and processes to serve the business. * NT (Nota Técnica) project -> organizing the application of SNotes to meet the government's legal requirements. * ECC to S/4HANA migration -> responsible for adapting custom programs, validating migrated Tax and Pricing tables, and running end-to-end tests; close collaboration with global teams, ensuring alignment and successful deliveries throughout the migration cycle. * John Deere Parts Support (MM and SD) -> delivering continuous-improvement solutions, with a 60% reduction in the incident queue through root-cause analysis (RCA), in collaboration with business and operations teams to ensure effective, sustainable resolutions. * Heineken Toll Voucher Automation -> building third-party integrations by using and consuming OData services, handling XML payloads received via web services from a Transportation Management system and transmitting the processed data to external platforms; developing and supporting the rollout of automated toll calculation and purchasing processes, with gains in efficiency and operational performance.
    Sophie: How do you identify the root cause of a technology problem?
    Marco Antonio: Among other projects
    Sophie: How do you identify the root cause of a technology problem?
    Marco Antonio: First, I contact whoever is facing the problem, whether on the business or the operations side. Second, I work out how to replicate it. Third, I do advanced debugging while reproducing the problem to identify the possible causes
    Sophie: How do you organize your time on projects with tight deadlines?
    Marco Antonio: I organize my work by creating cards with separate tasks and setting their deadlines and priorities. That way I always know where to start, where I am, and where I'm going.
    Sophie: Have you worked on a multidisciplinary team? How do you communicate with professionals from other areas?
    Marco Antonio: The teams I've worked on are the usual ones in the SAP world: developers and analysts. I communicate with professionals from other areas with curiosity, trying to understand their work and looking for gaps where I, as a developer, can improve a process for them
    Sophie: Are you available to start immediately, or do you have a timeline in mind?
    Marco Antonio: I'm available to start immediately
    Sophie: What ABAP best practices do you always try to follow in your work?
    Marco Antonio: - I develop with ABAP 7.4 (inline) syntax, which SAP guarantees is more performant. - I'm sharpening my CDS view skills to use them in S/4HANA environments, which makes table reads (code push-down) much faster, and I follow clean-core principles. - I follow OO concepts, keeping objects better organized. - I comment throughout the code to make it easy to follow what it's doing. - When I finish developing a program, I run ATC to check for warnings or errors, and I also run performance tests
    Sophie: Do you have experience with Fiscal and Financial receiving routines? Can you tell me a bit about it?
    Marco Antonio: I've worked in the FI module, but it was focused on Warranty, so I don't have experience with fiscal and financial receiving. But I'm eager to learn; I take on any new challenge and always manage to reach a solution
    Sophie: Can you tell me about a situation where you had to explain a technical concept to a non-technical person?
    Marco Antonio: Yes, I enjoy passing knowledge on to people, and since I'm very patient, that helps a lot with the other person's learning. I've taught basic ABAP to SAP functional consultants (so they can debug a program) and to data analysts (so they can start developing in ABAP)
    Sophie: Have you worked with systems integration? What was that experience like?
    Marco Antonio: Yes, in my last role at Heineken I took part in a project to automate toll-voucher purchases for drivers (the Toll Voucher project). I used an existing web service to consume data from a system called Shipsy, where carriers registered the driver, RENAVAM, truck plate, etc. With the data received, I processed it in SAP and sent it to another third-party system via an OData service. That system purchases the toll voucher according to the legally required information sent by SAP
Apr 27, 20:09 · Candidates invited
willdom · Veronica Cruz · Disconnected · 1 · No · No · Mar 31, 22:56 · Registered
Moons Dental · Villegas · Disconnected · No · No · Mar 24, 21:23 · Registered
Revelo · Tamyris Cuppari Kohler · Disconnected
2
  • Candidates 2 · 4 candidates · created 2026-03-24
  • Lari · 1 candidate · created 2026-03-24
No · 1 · No
1
  • Larissa Netto opted out
Mar 24, 20:06 · Candidates invited
Revelo Talent Corp · Beatriz Paz · Connected · 1 · Yes · 2 · No
2
  • Outro Teste strong fit
    View conversation
    Sophie: What do you think are crucial skills for an LLM Trainer?
    Outro Teste: But what’s your name?
    Sophie: What’s your availability for starting this role?
    Outro Teste: Tomorrow
  • Bia Paz medium fit
    View conversation
    Sophie: What do you think are crucial skills for an LLM Trainer?
    Bia Paz: I think LLM trainers need to be very good at football
    Sophie: What’s your availability for starting this role?
    Bia Paz: In 2 months
Apr 13, 21:12 · Interview completed
Tecla · Agustina Farace · Not connected · No · No · Mar 10, 17:12 · Registered

2. Organization Activity

Company · Admin Email · Registered · Audiences Created · Candidates Added · Campaigns Created · Candidates Invited · Actually Sent · Completed
Talently · martin.delaflor@talently.tech · 1 · 42 · 1 · 42 · 42
13
  • Kevin Abel Brito Díaz strong fit
    View conversation
    Sophie: Walk me through a production-grade project where you used TypeScript, React/Next.js, and Node.js. Beyond the scope, focus on a complex technical challenge—such as state management, SSR/ISR optimization, or backend architecture. Why did you choose specific libraries (e.g., NestJS vs. Express), and how did those choices impact the final outcome?
    Kevin Abel Brito Díaz: One production-grade project that comes to mind was a large e-commerce initiative for Paris CL (Cencosud), where I worked as part of the engineering team delivering scalable customer-facing features and backend integrations. The frontend stack was based on React, and the team selected this ecosystem because most developers already had strong experience with it, which accelerated delivery speed and reduced onboarding friction. We also benefited from the maturity of the ecosystem and strong community support. On the backend, we chose NestJS over Express because the project required a more structured and scalable architecture. NestJS provided clear module separation, dependency injection, decorators, and better maintainability for a growing codebase with multiple contributors. This was especially valuable as the platform evolved and new services were added. One of the main technical challenges was state consistency and integration between frontend flows and backend services, particularly in e-commerce processes where pricing, promotions, and inventory needed to stay synchronized. We addressed this by keeping frontend state predictable and centralizing business logic in backend services instead of duplicating rules on the client side. From an engineering perspective, NestJS helped standardize patterns across the team, improve code organization, and simplify testing. The final outcome was faster collaboration, easier maintenance, and a platform architecture better prepared for future growth.
    Sophie: Describe your experience implementing serverless workflows using AWS Lambda and S3. Can you detail a specific instance where you had to handle scalability, cold starts, or security (IAM roles)? How did you manage deployment and monitoring for these services?
    Kevin Abel Brito Díaz: I’ve worked with AWS services in environments where serverless components were used to support integrations, file processing, and event-driven workflows. A representative example involved handling file uploads and automated processing flows using Amazon S3 as the storage layer and Lambda functions to trigger validations and downstream processing after new files were uploaded. One important challenge was scalability during peak usage periods, where multiple files could be uploaded simultaneously. The advantage of Lambda was the ability to scale automatically based on incoming events, which reduced the need to manage dedicated infrastructure. Regarding cold starts, we minimized impact by keeping functions lightweight, reducing unnecessary dependencies, and designing logic to execute efficiently. For time-sensitive flows, we also paid attention to runtime selection and package size. From a security standpoint, we followed least-privilege principles with IAM roles, ensuring each Lambda function only had access to the exact S3 buckets or AWS resources required. For deployment, we used structured CI/CD workflows with version-controlled environments, which helped keep releases consistent and traceable. For monitoring, we relied on centralized logs and execution metrics to track failures, duration, and retries, allowing quick troubleshooting and continuous optimization. Overall, my focus in these architectures has always been reliability, maintainability, and operational efficiency.
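The least-privilege IAM approach Kevin mentions, scoping each Lambda to exactly the S3 resources it needs, amounts to a policy document like the one this hypothetical helper builds. The bucket name and action list are illustrative:

```typescript
// Hypothetical helper: builds a least-privilege IAM policy granting a Lambda
// read-only access to a single S3 bucket. Bucket name is illustrative.
interface PolicyStatement {
  Effect: "Allow" | "Deny";
  Action: string[];
  Resource: string[];
}

interface PolicyDocument {
  Version: string;
  Statement: PolicyStatement[];
}

function readOnlyS3Policy(bucket: string): PolicyDocument {
  const arn = `arn:aws:s3:::${bucket}`;
  return {
    Version: "2012-10-17",
    Statement: [
      {
        Effect: "Allow",
        // Only the actions this function actually needs: list + read.
        Action: ["s3:GetObject", "s3:ListBucket"],
        // Scoped to the exact bucket ARN and its objects, nothing broader.
        Resource: [arn, `${arn}/*`],
      },
    ],
  };
}
```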
    Sophie: Tell me about a time a specific user pain point dictated a difficult technical pivot or architectural decision. What was the conflict between the 'ideal' technical solution and the 'necessary' user solution? How did you measure the success of your fix?
    Kevin Abel Brito Díaz: One example that comes to mind was during an e-commerce project where users were experiencing friction during checkout because some product availability and promotional data were not updating fast enough in the purchase flow. From a technical perspective, the “ideal” solution was to build a more complete real-time distributed architecture with multiple synchronized services and deeper event-driven communication. However, that approach required more implementation time and added unnecessary complexity for the immediate business need. The necessary user-focused solution was to simplify the architecture and prioritize response speed and consistency in the checkout experience. We reworked critical API flows, optimized caching rules, and centralized some business validations server-side to reduce mismatches between frontend data and backend responses. The result was a smoother checkout process, fewer cart abandonment issues related to pricing or stock inconsistencies, and faster page interactions during peak traffic periods. We measured success through lower incident reports, improved response times, and better conversion behavior in the affected checkout steps. That experience reinforced an important principle for me: elegant architecture matters, but solving the user’s pain point at the right time matters more.
  • MARCOS LEBRON strong fit
    View conversation
    Sophie: Walk me through a production-grade project where you used TypeScript, React/Next.js, and Node.js. Beyond the scope, focus on a complex technical challenge—such as state management, SSR/ISR optimization, or backend architecture. Why did you choose specific libraries (e.g., NestJS vs. Express), and how did those choices impact the final outcome?
    MARCOS LEBRON: One production-grade project I worked on was the modernization of a healthcare prescription platform. The existing system had a legacy frontend and backend, and the goal was to move to a more scalable architecture with a modern frontend, stronger API contracts, better validation, and a foundation that could support multi-tenant usage for doctors, clinics, pharmacies, and administrators. One of the main technical challenges was managing the prescription flow; it was not just a simple form. The user had to select a patient, choose medications, validate prescription rules, attach doctor information, generate a PDF, and preserve enough state so the experience felt fast and reliable. I treated that flow almost like a prescription session, where the frontend state represented the current draft while the backend remained the source of truth. For state management, I avoided putting everything into a global store. Instead, I separated the state into different layers: local UI state stayed inside components, form state was handled closer to the form layer, and server state was handled through API-driven data-fetching patterns. The idea was to avoid a giant global state object that becomes hard to maintain. For shared domain state, like the selected tenant or authenticated user context, I kept that at a higher level because many modules needed access to it. On the rendering side, Next.js was useful because not every page needed the same strategy. Public or semi-static content could benefit from server rendering or static generation, while authenticated areas were more dynamic and API-driven. For example, dashboard and prescription flows needed fresh data and authorization checks, while informational pages or configuration-driven views could be optimized differently. Another backend-related piece was the BFF layer I needed to structure for the different microfrontend applications.
For backend architecture, if I had to choose between NestJS and Express in a similar production-grade environment, I would choose based on size and complexity. Express is great when you need something lightweight, but if the API has multiple modules, validation rules, authentication, role-based access, and many developers working on it, I prefer NestJS because it gives you a stronger structure through modules, dependency injection, DTOs, guards, and interceptors.
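The DTO-and-validation idea MARCOS cites as a reason to prefer NestJS is usually implemented there with class-validator decorators and a ValidationPipe, but the underlying pattern, rejecting a malformed prescription draft at the API boundary, can be shown framework-free. The field names and rules below are invented for illustration:

```typescript
// Framework-free sketch of DTO-style boundary validation for a prescription
// draft. In NestJS the same checks would live in a DTO class with
// class-validator decorators; all names here are hypothetical.
interface PrescriptionDraftDto {
  patientId: string;
  medicationIds: string[];
  doctorLicense: string;
}

function validateDraft(input: unknown): PrescriptionDraftDto {
  const d = input as Partial<PrescriptionDraftDto>;
  const errors: string[] = [];
  if (typeof d.patientId !== "string" || d.patientId.length === 0)
    errors.push("patientId is required");
  if (!Array.isArray(d.medicationIds) || d.medicationIds.length === 0)
    errors.push("at least one medication is required");
  if (typeof d.doctorLicense !== "string" || !/^[A-Z0-9-]+$/.test(d.doctorLicense))
    errors.push("doctorLicense is malformed");
  if (errors.length > 0) throw new Error(errors.join("; "));
  return d as PrescriptionDraftDto;
}
```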
    Sophie: Describe your experience implementing serverless workflows using AWS Lambda and S3. Can you detail a specific instance where you had to handle scalability, cold starts, or security (IAM roles)? How did you manage deployment and monitoring for these services?
    MARCOS LEBRON: I have experience working with AWS services like S3 and ECS, and also using Lambda and S3 for asynchronous processing. I implemented a file-processing pipeline: the user uploaded the file to S3, and that S3 event triggered a Lambda function. The Lambda then validated the file, extracted metadata, and transformed it. The reason I like this approach is that it keeps heavy work out of the main API requests. Instead of making the user wait while the backend processes the file, the API can issue a presigned URL, store the file in S3, and let Lambda handle the processing in the background. For cold starts, I keep the Lambda package small and avoid unnecessary dependencies; I prefer Node or Python runtimes, which start faster than Java or C#. For security, I apply least privilege: if a function only reads from one S3 bucket, the policy should be scoped to that specific ARN and nothing bigger, and I use AWS Secrets Manager for credentials. For deployment, I prefer CI/CD and IaC, whether SAM, Serverless Framework, or Terraform. The important thing is that Lambdas, events, IAM roles, and environment variables are defined in code and deployed consistently. For monitoring, I use CloudWatch for logs and metrics, set alarms, and configure dead-letter queues so events are not lost.
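The upload flow MARCOS outlines (the API hands back a presigned URL, S3 stores the file, Lambda processes it asynchronously) could be sketched like this. The presigner is injected so the handler stays testable; in real code it would wrap `getSignedUrl` from `@aws-sdk/s3-request-presigner`. The bucket name, key scheme, and expiry are illustrative assumptions:

```typescript
// Sketch of the "API issues a presigned URL" step. The real presigner would
// call the AWS SDK; here it is injected so the handler logic is testable
// without AWS. All names and values are illustrative.
type Presign = (bucket: string, key: string, expiresSeconds: number) => Promise<string>;

async function createUploadUrl(
  presign: Presign,
  userId: string,
  filename: string,
): Promise<{ key: string; url: string }> {
  // Namespace keys per user so downstream Lambdas can attribute uploads.
  const key = `uploads/${userId}/${Date.now()}-${filename}`;
  const url = await presign("user-uploads-bucket", key, 300); // short-lived URL
  return { key, url };
}
```

An S3 `ObjectCreated` event on the `uploads/` prefix would then trigger the processing Lambda, keeping the heavy work out of the request path as described.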
    Sophie: Tell me about a time a specific user pain point dictated a difficult technical pivot or architectural decision. What was the conflict between the 'ideal' technical solution and the 'necessary' user solution? How did you measure the success of your fix?
    MARCOS LEBRON: A good example was my time at EPAM, where I worked on a large enterprise frontend ecosystem with React, TypeScript, microfrontends, shared UI libraries, and a BFF layer. The user pain point came from both developers and product teams. We had around 25 applications, and teams were struggling with inconsistent frontend patterns, duplicated logic, slow local builds, and repeated API transformation logic across different microfrontends. Features took longer to deliver, small UI changes required more coordination than they should, and onboarding new devs was slow. The ideal technical solution would probably have been to redesign the whole platform from scratch and make every application follow the same perfect architecture. But that was not realistic, because these were production apps, teams were actively delivering features, and we could not pause the business to do a full rewrite. The difficult part was balancing technical idealism with what users actually needed. As engineers, we often want the cleanest possible architecture, but the users needed faster builds, more consistency, and less friction immediately. So we chose an incremental migration instead of a risky full rewrite and made a more pragmatic architectural decision. Instead of trying to fix everything at once, we focused on the biggest pain points. We migrated the build system from Webpack to Vite, improved the shared UI libraries, and used the BFF layer to simplify how the frontend consumed backend data. We measured success with both technical and practical signals. The biggest concrete metric was that the Vite migration reduced build times by more than 50%. We also saw better developer experience, faster local feedback loops, more reuse from shared packages, and cleaner integration through the BFF. The takeaway for me was that architecture should solve real pain.
In this case, the right decision was not to chase a perfect rewrite, but to make targeted architectural changes that improved delivery speed and system consistency while keeping production stable.
  • Andres Rocha
    View conversation
    Sophie: Walk me through a production-grade project where you used TypeScript, React/Next.js, and Node.js. Beyond the scope, focus on a complex technical challenge—such as state management, SSR/ISR optimization, or backend architecture. Why did you choose specific libraries (e.g., NestJS vs. Express), and how did those choices impact the final outcome?
    Andres Rocha: Sure, one project that comes to mind is an investment design platform I worked on, where users could build and simulate capitalization tables in real time. Sounds straightforward until you factor in that every single input change needs to cascade instantly — dilution percentages, valuations, equity breakdowns — all live, all consistent across multiple investor profiles simultaneously. That was the real challenge. The core problem was state management. Cap tables are inherently interdependent, so a change in one cell isn't isolated — it triggers a chain of recalculations. We couldn't let the UI ever show a half-updated state, and we couldn't afford race conditions during simulation runs either. The solution was splitting the workload: heavy financial calculations went through RabbitMQ as background jobs, while the UI stayed responsive through optimistic updates and WebSocket-pushed confirmations. That separation of concerns was what kept the system from becoming a nightmare to reason about. On the backend, we evaluated Express versus NestJS, and the arguments for NestJS were pretty clear once we mapped out the requirements. Express gives you freedom, but on a financial platform with a larger team, that freedom tends to become inconsistency over time — everyone structures their modules differently, dependency management gets messy, and testing becomes painful. NestJS forces a modular, DI-based architecture from day one. That structure was worth more than any flexibility we were giving up. The TypeScript integration was another big factor. We were using strictly typed DTOs for every calculation service, and that caught a surprising number of potential bugs before they ever hit runtime. For financial logic, that's not a nice-to-have — it's a requirement. We also got clean patterns for the RabbitMQ microservice integration and WebSocket handling out of the box, rather than wiring all of that manually. 
In practice, we reduced boilerplate by around 30%, test coverage improved significantly, and the frontend team could mock services without needing to coordinate with us constantly — which made a real difference in sprint velocity. On the frontend, we used Next.js with ISR for the dashboard layer. Investment templates don't change by the second — they're read-heavy but do get updated periodically. ISR was the right fit: pages are pre-rendered statically, but when a user pushes an update, it propagates globally without a full rebuild. We kept LCP under 1.5s even on views with dense data visualizations, which was one of our harder non-functional requirements to hit.
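The cascading recalculation Andres describes can be illustrated with a toy cap-table function: one input change (a new investment) deterministically recomputes every holder's ownership in a single pass, so the UI never shows a half-updated state. All numbers, field names, and the simplified math are invented; real cap tables add option pools, SAFEs, and rounding rules:

```typescript
// Toy cap-table math illustrating the "one change cascades everywhere" point.
// Deliberately minimal; all shapes and names are illustrative.
interface Holder {
  name: string;
  shares: number;
}

function applyInvestment(
  holders: Holder[],
  preMoneyValuation: number,
  investment: number,
  investorName: string,
): { holders: Holder[]; ownership: Map<string, number> } {
  const existingShares = holders.reduce((s, h) => s + h.shares, 0);
  const pricePerShare = preMoneyValuation / existingShares;
  const newShares = investment / pricePerShare;
  const next = [...holders, { name: investorName, shares: newShares }];
  const total = existingShares + newShares;
  // Every holder's percentage is recomputed in the same pass: no partial state.
  const ownership = new Map(next.map((h) => [h.name, h.shares / total]));
  return { holders: next, ownership };
}
```

In the platform described, this kind of recomputation ran as a background job (via RabbitMQ) while the UI showed an optimistic result and reconciled on the WebSocket confirmation.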
    Sophie: Describe your experience implementing serverless workflows using AWS Lambda and S3. Can you detail a specific instance where you had to handle scalability, cold starts, or security (IAM roles)? How did you manage deployment and monitoring for these services?
    Andres Rocha: Most of my hands-on Lambda and S3 experience comes from my time at Konfio, where we ran a fairly large AWS-based architecture handling payment orders, KYC flows, and document processing across multiple microservices. A concrete case where scalability became a real concern was around document ingestion in the KYC platform. We had periods where a high volume of users were uploading identity documents simultaneously — images, PDFs, that kind of thing. The initial approach of routing uploads through the Lambda directly became a bottleneck fast: the function was handling both the HTTP layer and the S3 write, which under load created timeouts and inconsistent responses. The fix was straightforward once we identified it — we moved to presigned URLs, so the Lambda's only job was generating a short-lived, scoped URL that the client used to upload directly to S3. That decoupled the load from the function entirely and let S3 handle the ingestion at its own scale, which it does well. On cold starts — this was something we had to think about carefully for Lambdas that sat in critical paths, like the ones triggering downstream processes after a document was validated. Cold start latency in those cases was noticeable. The approach we settled on was Provisioned Concurrency for the functions that needed to respond fast, keeping a set number of instances warm at all times. For less critical functions, we used scheduled EventBridge rules as a lighter alternative — just enough to keep them from going completely cold during business hours. The tradeoff with Provisioned Concurrency is cost, so we were selective about where we applied it. For IAM, the rule we followed was strict least-privilege per function. Each Lambda had its own role scoped to exactly what it needed — if a function's job was to read from a specific S3 bucket, its role only allowed s3:GetObject on that bucket's ARN, nothing broader. 
This mattered especially in a fintech environment where a misconfigured or accidentally triggered function shouldn't be able to touch data it has no business accessing. We also used resource-based policies on the S3 side to add a second layer — so even if a role was misconfigured, the bucket itself would reject unauthorized actions. Deployment was managed through Terraform for infrastructure definitions and CodePipeline for the CI/CD side. Lambda configurations, IAM roles, S3 bucket policies — all version-controlled and applied through the pipeline, which made environment parity between staging and production much easier to maintain. For monitoring, CloudWatch was the primary tool — we had log groups per function, metric filters for error rates and duration, and dashboards aggregating the most critical functions. For anything that needed deeper tracing across services, we used X-Ray to follow a request through the chain and identify where latency or failures were actually happening.
    Sophie: Tell me about a time a specific user pain point dictated a difficult technical pivot or architectural decision. What was the conflict between the 'ideal' technical solution and the 'necessary' user solution? How did you measure the success of your fix?
    Andres Rocha: This one comes directly from the KYC platform at Konfio, and it's a good example of how skipping proper discovery with the actual end user can force a significant architectural rework down the line. The platform was originally built to support a document review flow for internal KYC analysts — the people responsible for validating loan applications. The initial design was straightforward: reviewers could see the most recent document uploaded by the applicant for each required field. That seemed reasonable at the time, but it turned out to be a critical gap. As the volume of applications grew, the team started flagging an unusual number of false rejections. Analysts were rejecting applications that should have been approved, and when we dug into the cases, a pattern emerged. Applicants would upload a document, it would get flagged for some reason — expired ID, wrong format, incomplete information — and they'd upload a new version. But the reviewer had no visibility into the previous submissions. They couldn't tell whether the issue had actually been corrected or whether the new document introduced a different problem. Without that context, decisions were being made on incomplete information. There was also a separate but related problem we uncovered during the same investigation. Sales reps, who were often sitting with applicants during the onboarding process, were sometimes uploading documents from their own accounts or pushing files through unofficial channels when something wasn't going through correctly. That was generating file inconsistencies on the platform side and bypassing the validation pipeline entirely. That category alone accounted for roughly 40% of rejection errors across all applications. The conflict was architectural. The system had been designed around a single-document-per-requirement model — there was no concept of document history or versioning at the data level. 
To fix this properly, we needed to restructure how documents were associated with applications: instead of a flat reference to the latest file, we needed a versioned relationship between each document type and all its submissions across the lifecycle of a request. That change wasn't isolated — it touched the storage layer, the review UI, the validation logic, and several downstream processes that consumed document state. The decision to do it properly rather than patch around it was driven by the KYC team themselves. Once we sat down with the analysts and mapped out how they actually worked through a review — what information they needed, what context they were missing — it became clear that a surface-level fix wouldn't hold. We needed to give them a tool that matched their actual workflow. We measured the outcome mostly through rejection error rates. False positives dropped significantly once reviewers had full document history available. The sales rep workaround problem was addressed through stricter upload attribution and validation at the point of submission, which closed that gap at the source. The broader result was a faster, more confident review process — fewer escalations, less back-and-forth with applicants, and a meaningful reduction in the time vendors were spending re-collecting documents that had already been submitted correctly.
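The versioned-document model Andres lands on, where each requirement keeps its full submission history plus upload attribution instead of a flat "latest file" reference, might look roughly like this. The types are invented for illustration:

```typescript
// Sketch of moving from "one document per requirement" to a versioned history,
// so reviewers see every submission. All shapes are hypothetical.
interface Submission {
  version: number;
  fileKey: string;
  uploadedBy: string; // upload attribution closes the sales-rep workaround
  uploadedAt: Date;
}

class DocumentHistory {
  private submissions: Submission[] = [];

  add(fileKey: string, uploadedBy: string): Submission {
    const sub: Submission = {
      version: this.submissions.length + 1,
      fileKey,
      uploadedBy,
      uploadedAt: new Date(),
    };
    this.submissions.push(sub);
    return sub;
  }

  latest(): Submission | undefined {
    return this.submissions[this.submissions.length - 1];
  }

  // Reviewers see the whole trail, not just the newest file.
  all(): readonly Submission[] {
    return this.submissions;
  }
}
```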
  • Sergio Matamoros strong fit
    View conversation
    Sophie: Walk me through a production-grade project where you used TypeScript, React/Next.js, and Node.js. Beyond the scope, focus on a complex technical challenge—such as state management, SSR/ISR optimization, or backend architecture. Why did you choose specific libraries (e.g., NestJS vs. Express), and how did those choices impact the final outcome?
    Sergio Matamoros: Developing the Ariel Scheduling System required a robust architecture using NestJS for a modular, type-safe backend and Next.js to leverage Incremental Static Regeneration for optimal performance. The core technical challenge involved managing real-time booking synchronization to prevent race conditions, which I solved by implementing optimistic updates via TanStack Query and WebSockets for immediate feedback and data integrity. This choice of a structured framework over a more minimal approach like Express allowed for seamless validation through DTOs and a scalable codebase that maintained high speed and reliability even as complex features like inventory management were integrated.
    Sophie: Describe your experience implementing serverless workflows using AWS Lambda and S3. Can you detail a specific instance where you had to handle scalability, cold starts, or security (IAM roles)? How did you manage deployment and monitoring for these services?
    Sergio Matamoros: I have implemented serverless workflows using AWS Lambda and S3 to automate high-volume data processing, specifically for managing dynamic assets and inventory updates in a production environment. To address the challenge of cold starts, I optimized the execution environment by minimizing the deployment package size and utilizing Provisioned Concurrency for time-sensitive triggers, ensuring the system remained responsive during sudden traffic spikes. Scalability was managed through S3 event notifications that triggered Lambda functions asynchronously, while security was strictly enforced using the principle of least privilege with granular IAM roles, limiting function access only to the specific S3 prefixes and database rows required for the task. For deployment and monitoring, I utilized the AWS Serverless Application Model (SAM) alongside CloudWatch for real-time logging and performance metrics, allowing me to track execution duration and memory usage to ensure the workflow remained cost-effective and resilient under heavy loads.
    Sophie: Tell me about a time a specific user pain point dictated a difficult technical pivot or architectural decision. What was the conflict between the 'ideal' technical solution and the 'necessary' user solution? How did you measure the success of your fix?
    Sergio Matamoros: While developing the Ariel Scheduling System, a significant user pain point emerged when clients in areas with unstable internet connectivity reported that the application would freeze or lose booking data during brief signal drops, leading to a "necessary" pivot toward a robust offline-first synchronization strategy. The "ideal" technical solution from a developer's perspective was a strictly server-authoritative model that ensured absolute data consistency by requiring a constant connection, but the "necessary" user solution demanded that the interface remain interactive even when offline. This forced an architectural shift to implement a local-first state using IndexedDB through TanStack Query's persistent cache, allowing users to continue the booking flow while the application queued requests locally. The conflict was resolved by creating a synchronization engine that handled background reconciliation once the connection was restored, prioritizing the user's intent over immediate server confirmation. Success was measured by a 35% decrease in abandoned booking sessions and a significant drop in support tickets related to "lost data," proving that sacrificing a purely synchronous architecture for a more resilient, client-side caching model was essential for real-world usability in the technician's specific operating environment.
  • Sebastián Josué Montero Madrigal strong fit
    View conversation
    Sophie: Walk me through a production-grade project where you used TypeScript, React/Next.js, and Node.js. Beyond the scope, focus on a complex technical challenge—such as state management, SSR/ISR optimization, or backend architecture. Why did you choose specific libraries (e.g., NestJS vs. Express), and how did those choices impact the final outcome?
    Sebastián Josué Montero Madrigal: While my strongest production experience has been backend-focused, I've been applying full-stack principles using TypeScript, React, and Node.js concepts. I built a REST API with ASP.NET Core for managing a motorcycle inventory system, using a layered architecture with controllers, services, and data access. A key challenge was maintaining scalability and clean separation of concerns, which I solved by isolating business logic in a service layer and using dependency injection; this is why I would choose NestJS over Express in Node.js, since it enforces a similar structured approach. On the frontend side, I've worked with React, focusing on state management and component design, ensuring efficient data flow and avoiding unnecessary re-renders. Regarding Next.js, I understand trade-offs like SSR vs. ISR: for frequently changing data, such as inventory status, I would use SSR for consistency, while ISR would be useful for less dynamic content to improve performance. Overall, I bring strong backend fundamentals and a growing practical understanding of full-stack development, especially around architecture and data flow.
    Sophie: Describe your experience implementing serverless workflows using AWS Lambda and S3. Can you detail a specific instance where you had to handle scalability, cold starts, or security (IAM roles)? How did you manage deployment and monitoring for these services?
    Sebastián Josué Montero Madrigal: I haven't yet implemented a full production workflow using AWS Lambda and S3, but I have studied and practiced serverless architecture concepts and how they apply to scalable systems. From my backend experience building REST APIs, I understand how to design stateless services, which translates well to Lambda functions. For example, in a system like my motorcycle inventory API, I would use Lambda to handle event-driven tasks such as processing updates or handling file uploads, with S3 used for storage. Regarding scalability, one advantage of Lambda is its automatic scaling, but I'm aware of challenges like cold starts; to mitigate this, I would minimize function size, reduce dependencies, and consider strategies like keeping functions warm for latency-sensitive operations. For security, I understand the importance of IAM roles and least-privilege access: each Lambda function should only have the permissions strictly necessary, for example limited access to specific S3 buckets or actions. For deployment and monitoring, I'm familiar with tools like AWS SAM or Serverless Framework for structured deployments, and CloudWatch for logging and performance monitoring. Even though I'm still building hands-on experience with these tools, I have a solid understanding of how to apply them in real scenarios. Overall, while my direct production experience with AWS serverless is still growing, I have a strong foundation in backend architecture and I'm actively working on applying these concepts in cloud environments.
    Sophie: Tell me about a time a specific user pain point dictated a difficult technical pivot or architectural decision. What was the conflict between the 'ideal' technical solution and the 'necessary' user solution? How did you measure the success of your fix?
    Sebastián Josué Montero Madrigal: In one of my backend projects, I built a motorcycle inventory system where users needed to quickly view and update vehicle availability. Initially I designed the system with a more "ideal" architecture: strict separation of layers, normalized data structures, and multiple relational joins to ensure consistency and scalability. However, during testing I realized this approach introduced latency when retrieving data, especially for common user actions like listing available vehicles. From a technical perspective, the design was clean and correct, but it didn't provide the responsiveness users needed. The pivot was to optimize for the user experience rather than strict architectural purity: I introduced more efficient queries and simplified some data access patterns, even if that meant slightly relaxing normalization in certain areas. This reduced response times significantly and made the system feel much more responsive. The main conflict was between maintaining a perfectly structured, theoretically ideal backend and delivering fast, practical performance for end users; I chose to prioritize the user experience while still keeping the system maintainable. To measure success, I compared response times before and after the changes and validated that common operations became noticeably faster. Additionally, the system handled frequent queries more efficiently without affecting stability. This experience reinforced the importance of balancing clean architecture with real-world user needs, especially in performance-sensitive applications.
  • José Torres strong fit
    View conversation
    Sophie: Walk me through a production-grade project where you used TypeScript, React/Next.js, and Node.js. Beyond the scope, focus on a complex technical challenge—such as state management, SSR/ISR optimization, or backend architecture. Why did you choose specific libraries (e.g., NestJS vs. Express), and how did those choices impact the final outcome?
    José Torres: For a recent high-demand project (called Project Windfall), I set up a monorepo with Turborepo using Next.js and NestJS. I chose the NestJS library over Express for the backend because, since it’s a system with complex business logic, I needed a modular and robust architecture that would facilitate dependency injection and long-term maintenance, avoiding the clutter that often arises in Express as the project scales. The biggest challenge was managing real-time state for thousands of concurrent users. To avoid compromising performance, I separated the logic: I used TanStack Query for server-side caching and Zustand for atomic global state. This, combined with a Redis layer on the backend and database index optimization, reduced response latency from 500 ms to less than 100 ms, enabling smooth updates without unnecessary re-renders. I used TypeScript to share data models between the backend and frontend within the monorepo, achieving end-to-end type safety, which virtually eliminated all integration errors. Ultimately, these decisions enabled the platform to handle critical traffic spikes while continuing to deliver a robust user experience. I employed the modular atomic design methodology to make the components more modular and scalable.
    Sophie: Describe your experience implementing serverless workflows using AWS Lambda and S3. Can you detail a specific instance where you had to handle scalability, cold starts, or security (IAM roles)? How did you manage deployment and monitoring for these services?
    José Torres: I've implemented serverless workflows using S3 as a trigger for AWS Lambda functions dedicated to file processing and data extraction. To handle scalability and prevent bottlenecks, I used Amazon SQS as a buffer, allowing the system to process load spikes asynchronously without hitting concurrency limits. Regarding cold starts, I optimized performance by reducing bundle sizes with esbuild and applying Provisioned Concurrency to business-critical processes. For security, I strictly follow the principle of least privilege using specific IAM Roles, limiting Lambda access to exact S3 buckets and prefixes. I manage deployment and monitoring through Infrastructure as Code (IaC) with the Serverless Framework integrated into GitHub Actions pipelines, while centralizing observability via CloudWatch and AWS X-Ray. This approach ensures an auditable, secure, and highly scalable environment that aligns perfectly with project operational costs.
    Sophie: Tell me about a time a specific user pain point dictated a difficult technical pivot or architectural decision. What was the conflict between the 'ideal' technical solution and the 'necessary' user solution? How did you measure the success of your fix?
    José Torres: In a project for an online casino, we faced a conflict between data consistency and perceived performance. The "ideal" technical solution was to validate every action directly with the server to ensure 100% balance accuracy, but this created frustrating latency for users with unstable connections. To address this pain point, we pivoted to an Optimistic UI architecture using Zustand and TanStack Query. This allowed the interface to react instantly (under 100ms) by predicting the success of an action on the client-side, while the actual synchronization happened in the background with a robust rollback system in case of server-side failure. We measured success through both product metrics and technical performance. We saw a 25% increase in session duration as the visual fluidity eliminated the friction that was causing user drop-off. Technically, we monitored the "rollback rate" to ensure our prediction logic remained accurate and optimized our Interaction to Next Paint (INP), moving from a critical state to "Good" in Core Web Vitals. This pivot proved that prioritizing immediate feedback over absolute synchronous consistency was essential for maintaining user engagement.
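The optimistic-UI-with-rollback pivot described above reduces to a small pattern: apply the predicted state immediately, then restore a snapshot if the server rejects the action. A minimal sketch in plain TypeScript (no Zustand/TanStack Query; `placeBet` and the wallet shape are illustrative):

```typescript
// Optimistic update with rollback: the UI state changes instantly, and a
// snapshot is restored if the background server sync fails.
type Wallet = { balance: number };

async function placeBet(
  wallet: Wallet,
  amount: number,
  syncWithServer: () => Promise<void>,
): Promise<boolean> {
  const snapshot = wallet.balance;   // rollback point
  wallet.balance -= amount;          // optimistic: react in < 100 ms
  try {
    await syncWithServer();          // real sync happens in the background
    return true;
  } catch {
    wallet.balance = snapshot;       // server rejected: roll back
    return false;
  }
}
```

The "rollback rate" metric mentioned in the answer would simply be the fraction of calls that take the `catch` path.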
  • pablo guzman strong fit
    View conversation
    Sophie: Walk me through a production-grade project where you used TypeScript, React/Next.js, and Node.js. Beyond the scope, focus on a complex technical challenge—such as state management, SSR/ISR optimization, or backend architecture. Why did you choose specific libraries (e.g., NestJS vs. Express), and how did those choices impact the final outcome?
    pablo guzman: Well, the most complex project I have worked on was a travel tourism application. I used NestJS to configure the API service, and I chose it because of business requirements and because NestJS offers structure, fast building, and a scalable architecture (dependency injection, DTOs). On the frontend we used Next.js because the main goal of the project was to get the maximum SEO score. It was also the first time I implemented TanStack Query, and it helped abstract and better handle most of the code dealing with the API, caching, and navigation, reducing load on services (DB, HTTP requests).
    Sophie: Describe your experience implementing serverless workflows using AWS Lambda and S3. Can you detail a specific instance where you had to handle scalability, cold starts, or security (IAM roles)? How did you manage deployment and monitoring for these services?
    pablo guzman: Sure. I have used AWS just once; most of my experience with this kind of deployment is on Vercel and Railway. In both, I configured the server to spin down and back up when there were no requests within a certain period of time. I handle most of my deployments via GitHub, synced with those services, and do monitoring via each service's dashboard.
    Sophie: Tell me about a time a specific user pain point dictated a difficult technical pivot or architectural decision. What was the conflict between the 'ideal' technical solution and the 'necessary' user solution? How did you measure the success of your fix?
    pablo guzman: There was a time when our team was asked to ship a React Native app to the iOS and Android stores, but it took longer than expected and we were running out of time. The team proposed implementing the app as a Progressive Web App (PWA) instead. We got approval, delivered it that way, and avoided the iOS and Android store issues for new apps.
  • Johan Marcel Fernández Castillo strong fit
    View conversation
    Sophie: Walk me through a production-grade project where you used TypeScript, React/Next.js, and Node.js. Beyond the scope, focus on a complex technical challenge—such as state management, SSR/ISR optimization, or backend architecture. Why did you choose specific libraries (e.g., NestJS vs. Express), and how did those choices impact the final outcome?
    Johan Marcel Fernández Castillo: Sure! I recently led a project where we built a high-performance platform using Next.js, TypeScript, and Tailwind CSS with shadcn/ui. The main challenge was balancing a rich UI with fast load times. I architected the solution using React Server Components (RSC) for the majority of the application. This significantly improved our SEO and reduced the JavaScript bundle size since most of the logic stayed on the server. I reserved Client Components strictly for interactive elements like complex forms or real-time filters. For the backend, I leveraged Next.js API Routes instead of a separate Express server, because it simplified the architecture and deployment. Since the frontend and backend share the same TypeScript definitions, I ensured full type safety across the entire stack. This reduced cold start latency and infrastructure costs. By using Server Actions, I eliminated the need for most 'useEffect' data fetching, making the data flow more predictable and much easier to maintain.
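The shared-TypeScript-definitions point can be made concrete with a small sketch: one DTO type and a runtime guard live in shared code, so the API route and the client cannot drift apart. `UserDto`, `handler`, and `fetchUser` are illustrative names, not from the project.

```typescript
// Shared contract: both the "API route" and the "client" below use the
// same type and the same runtime guard.
interface UserDto {
  id: string;
  email: string;
}

function isUserDto(v: unknown): v is UserDto {
  return typeof v === "object" && v !== null &&
    typeof (v as UserDto).id === "string" &&
    typeof (v as UserDto).email === "string";
}

// "API route" side: serializes the DTO.
function handler(): string {
  const user: UserDto = { id: "u1", email: "a@b.c" };
  return JSON.stringify(user);
}

// Client side: parses and narrows with the same shared guard.
function fetchUser(body: string): UserDto {
  const parsed: unknown = JSON.parse(body);
  if (!isUserDto(parsed)) throw new Error("contract violation");
  return parsed;
}
```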
    Sophie: Describe your experience implementing serverless workflows using AWS Lambda and S3. Can you detail a specific instance where you had to handle scalability, cold starts, or security (IAM roles)? How did you manage deployment and monitoring for these services?
    Johan Marcel Fernández Castillo: I’ve used AWS Lambda and S3 primarily to build event-driven, asynchronous workflows. I implemented a pipeline where users uploaded large assets to an S3 bucket. This triggered a Lambda function to process the files (optimization and metadata extraction) before updating our database. To handle spikes in traffic, I optimized the Lambda package size by keeping dependencies minimal, which significantly reduced cold start times. For latency-critical paths, I used Provisioned Concurrency. I followed the principle of 'least privilege.' Instead of broad permissions, I created specific IAM roles for the Lambda with access only to the necessary S3 prefixes and specific DynamoDB tables. I managed the infrastructure using AWS SAM to ensure reproducible environments. For monitoring, I relied on CloudWatch Logs and set up Alarms to notify the team of any execution failures or duration timeouts
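The least-privilege IAM role described above typically comes down to a policy document scoped to one bucket prefix. A hedged illustration using the standard IAM JSON policy shape (the bucket and prefix names here are placeholders, not from the project):

```typescript
// Least-privilege policy sketch: the Lambda's role may only read/write
// objects under one prefix of one bucket, nothing else.
const lambdaS3Policy = {
  Version: "2012-10-17",
  Statement: [
    {
      Effect: "Allow",
      Action: ["s3:GetObject", "s3:PutObject"],
      Resource: ["arn:aws:s3:::uploads-bucket/incoming/*"],
    },
  ],
} as const;

// A broad grant like Resource: "arn:aws:s3:::*" would defeat the purpose;
// scoping to the prefix is what "least privilege" means here.
```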
    Sophie: Tell me about a time a specific user pain point dictated a difficult technical pivot or architectural decision. What was the conflict between the 'ideal' technical solution and the 'necessary' user solution? How did you measure the success of your fix?
    Johan Marcel Fernández Castillo: A good example was in an HOS (Hours of Service) system I worked on for a company in the sugar industry. The ideal technical solution was strict validation on every step: driver license checks, freight direction validation, and route confirmation with separate API calls. Architecturally, it was clean, but users hated it because dispatchers needed speed, not extra friction. The pain point was that operators were losing time with repeated validations, unnecessary re-renders, and form resets. I changed the flow using React, TypeScript, Apollo Client, and redux-form so validations only happened when values actually changed; for example, the driver license endpoint only ran if the license number was modified. I also optimized state handling to avoid unnecessary re-renders and duplicate requests. The backend team preferred stricter validation, but we prioritized the user experience while keeping critical checks. We measured success by faster form completion, fewer support complaints, and fewer repeated API calls. The best proof was that users stopped using manual workarounds and trusted the system more.
  • Eyder Garcia medium fit
    View conversation
    Sophie: Walk me through a production-grade project where you used TypeScript, React/Next.js, and Node.js. Beyond the scope, focus on a complex technical challenge—such as state management, SSR/ISR optimization, or backend architecture. Why did you choose specific libraries (e.g., NestJS vs. Express), and how did those choices impact the final outcome?
    Eyder Garcia: Well, to start, to use state management correctly, I should first clarify that I use SSR for almost all component and page development, because it follows best practices and achieves better performance for data retrieval and SEO. ISR is also very useful for updating pages or pieces of content without rebuilding the entire page. Regarding backend architecture, it depends on the product's scalability needs and how many teams collaborate on the project. For a product that does not need to scale much, we can start with a modular monolith pattern, since it is very clean and easier to understand for small-to-medium projects, and there I use Express.js. With NestJS, I would prefer a microservices pattern, since the framework is very friendly and stable and provides many resources for that kind of setup. But again, it depends on how many people collaborate on the project and whether the product is highly scalable and complex and needs to be independently deployable.
    Sophie: Describe your experience implementing serverless workflows using AWS Lambda and S3. Can you detail a specific instance where you had to handle scalability, cold starts, or security (IAM roles)? How did you manage deployment and monitoring for these services?
    Eyder Garcia: About AWS in general, I have knowledge and a bit of experience; I've used S3 and EC2 for personal projects. It's not difficult for me to learn new topics or standards; I have the techniques and skills to adapt and learn fast. For monitoring, I can use the monitoring the AWS platform provides, and also Sentry for backend and frontend.
    Sophie: Tell me about a time a specific user pain point dictated a difficult technical pivot or architectural decision. What was the conflict between the 'ideal' technical solution and the 'necessary' user solution? How did you measure the success of your fix?
    Eyder Garcia: I would build a comparison table to identify the specific features of each approach, then analyze it by exploring the key differences, the scope, and the disadvantages, and also do research using AI and analyze the comparison.
  • ANGEL VILLAVICENCIO strong fit
    View conversation
    Sophie: Walk me through a production-grade project where you used TypeScript, React/Next.js, and Node.js. Beyond the scope, focus on a complex technical challenge—such as state management, SSR/ISR optimization, or backend architecture. Why did you choose specific libraries (e.g., NestJS vs. Express), and how did those choices impact the final outcome?
    ANGEL VILLAVICENCIO: Sure. One of the main projects I worked on was a SaaS marketplace built with TypeScript, Next.js, and Node.js on AWS. On the frontend, we used SSR and ISR with Next.js to balance performance and dynamic data. A key challenge was handling frequently updated content without losing performance, so we combined ISR with client-side fetching using React Query. For state management, we avoided Redux and used React Query + local state, which simplified caching and improved performance. On the backend, we used NestJS for its modular architecture and scalability. The system was deployed on AWS using a serverless approach with Lambda, API Gateway, and S3 for storage. We integrated SQS for async processing (e.g., image handling, notifications) and used RDS with optimized queries for relational data. We also leveraged CloudFront for CDN and improved latency, and implemented infrastructure as code for consistent deployments. This allowed us to scale efficiently, decouple services, and maintain high performance under increasing load.
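The ISR-plus-client-fetching combination mentioned above rests on a simple caching behavior: serve a prebuilt value until it is older than a revalidation window, then regenerate it. A loose, framework-free sketch of that behavior in plain TypeScript (purely illustrative; real ISR regenerates in the background on the server):

```typescript
// TTL-style regeneration sketch: a cached value is rebuilt once it is
// older than `revalidateMs`. The injectable clock makes it testable.
function createIsrCache<T>(
  produce: () => T,
  revalidateMs: number,
  now: () => number,
) {
  let value = produce();
  let builtAt = now();
  return {
    get(): T {
      if (now() - builtAt >= revalidateMs) {
        value = produce();  // stale: regenerate the entry
        builtAt = now();
      }
      return value;         // fresh enough: serve the cached build
    },
  };
}
```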
    Sophie: Describe your experience implementing serverless workflows using AWS Lambda and S3. Can you detail a specific instance where you had to handle scalability, cold starts, or security (IAM roles)? How did you manage deployment and monitoring for these services?
    ANGEL VILLAVICENCIO: Sure. I’ve worked extensively with serverless workflows using AWS Lambda and S3 in production environments. In one project, we built a pipeline for processing user-uploaded images. Files were uploaded to S3, which triggered Lambda functions for validation, transformation, and metadata storage. To handle scalability during peak traffic, we designed the system to be fully event-driven using S3 + SQS, allowing us to buffer and process jobs asynchronously without overloading the system. To mitigate cold starts, we kept Lambdas lightweight, optimized dependencies, and split functions by responsibility. For critical paths, we also tuned memory allocation to improve execution time. On the security side, we used IAM roles with least-privilege access, ensuring each Lambda only had permissions to the specific S3 buckets or services it needed. For deployment, we used infrastructure as code (AWS CDK) with CI/CD pipelines for consistent releases. For monitoring, we relied on CloudWatch for logs, metrics, and alerts, and also used AWS X-Ray for distributed tracing to analyze latency and debug issues across services. Additionally, in S3 we implemented lifecycle rules to optimize long-term storage costs, automatically transitioning objects to cheaper storage classes based on usage patterns. This setup allowed us to scale efficiently while maintaining reliability, observability, and cost efficiency.
    Sophie: Tell me about a time a specific user pain point dictated a difficult technical pivot or architectural decision. What was the conflict between the 'ideal' technical solution and the 'necessary' user solution? How did you measure the success of your fix?
    ANGEL VILLAVICENCIO: Sure, in one project, we initially had multiple systems running under a monolithic architecture. They were stable and working well, but as the business grew, the setup became costly and harder to scale. From a technical perspective, the ideal approach was to take time to analyze all the systems, define proper boundaries, and redesign the architecture into well-structured services. However, the business needed quick results and wasn’t willing to wait for a full redesign. The conflict was between building a clean, long-term scalable architecture versus delivering immediate improvements. We took an incremental approach. Instead of redesigning everything upfront, we identified the most critical systems and started decoupling them into independent services, keeping future scalability in mind. This allowed us to deliver value quickly while progressively improving the architecture. The trade-off was not having the full picture from the start, which required adjustments later. However, it helped us gain stakeholder trust and evolve the system without disrupting operations. We measured success through improved performance in key flows, reduced load on critical components, and faster feature delivery.
  • Ronaldo Sergio Jara Condori strong fit
    View conversation
    Sophie: Walk me through a production-grade project where you used TypeScript, React/Next.js, and Node.js. Beyond the scope, focus on a complex technical challenge—such as state management, SSR/ISR optimization, or backend architecture. Why did you choose specific libraries (e.g., NestJS vs. Express), and how did those choices impact the final outcome?
    Ronaldo Sergio Jara Condori: One of my big technical challenges was creating an AI application called "Nawa method". I needed to choose the whole tech stack and build the entire architecture by myself. On this occasion I was evaluating NestJS and Express for the backend, and comparing these alternatives I chose Nest over Express for two main reasons. First, it has its own structured architecture that is very easy to follow, so you don't have to invent a new bespoke one; it's also easier for new team members to pick up, which is very helpful when you want to grow the team. Second, managing concurrency is easier than in other frameworks, and in an application where you want to implement AI it's very important to have a tool that helps with that. For the frontend, I chose React with React Context. The challenge there was how to structure the global state, because the application was planned to have a lot of interactivity and gamification. It was a real challenge, but I divided responsibilities by creating different stores, each managing just one thing instead of one huge store, and that helped the development a lot.
    Sophie: Describe your experience implementing serverless workflows using AWS Lambda and S3. Can you detail a specific instance where you had to handle scalability, cold starts, or security (IAM roles)? How did you manage deployment and monitoring for these services?
    Ronaldo Sergio Jara Condori: Honestly, I don't have hands-on experience with AWS Lambda or S3; infrastructure and cloud services haven't been my main focus so far. What I do have some experience with is monitoring and observability. I'd choose OpenTelemetry to manage traces, logs, and metrics; I think it's becoming the standard and it integrates well with most monitoring tools. For error tracking I've used Sentry, which I find very reliable for catching and diagnosing issues in production.
    Sophie: Tell me about a time a specific user pain point dictated a difficult technical pivot or architectural decision. What was the conflict between the 'ideal' technical solution and the 'necessary' user solution? How did you measure the success of your fix?
    Ronaldo Sergio Jara Condori: Once I was building an MVP for a client that shifted mid-way from a CMS with AI to a completely different product — AI chatbots. The ideal solution would've been to remove the old code entirely, but given the timeline I kept both systems coexisting in the same repo, heavily commented the old code and moved forward. It created technical debt, but it was the necessary call. What saved me was having built a solid design system upfront — the UI stayed mostly intact, changes were mainly in routes and larger components. The client got their MVP on time, that was the measure that mattered.
  • Jean Carlos Vega Vidarte strong fit
    View conversation
    Sophie: Walk me through a production-grade project where you used TypeScript, React/Next.js, and Node.js. Beyond the scope, focus on a complex technical challenge—such as state management, SSR/ISR optimization, or backend architecture. Why did you choose specific libraries (e.g., NestJS vs. Express), and how did those choices impact the final outcome?
    Jean Carlos Vega Vidarte: In a recent project, I contributed to a financial product affiliation platform using TypeScript, React with Next.js, and a Node.js backend. A key challenge was managing complex, interdependent state (coupons, campaigns, bank accounts), which I handled using Redux to ensure predictable updates and consistency across multi-step flows. On the backend, we adopted a microservices architecture with NestJS instead of Express due to its structured architecture, dependency injection, and TypeScript-first design, which improved maintainability, scalability, and team onboarding while enabling clear separation of domains like payments and user management.
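The "predictable updates" claim above is the core of the Redux pattern: a pure reducer where every transition is an explicit action, so interdependent fields (coupons, campaigns, accounts) cannot change inconsistently. A minimal sketch with illustrative field names (not the platform's actual state shape):

```typescript
// Pure reducer for a multi-step affiliation flow: state never mutates,
// and every change is a named, explicit action.
type AffiliationState = {
  step: number;
  coupon?: string;
  bankAccount?: string;
};

type Action =
  | { type: "APPLY_COUPON"; coupon: string }
  | { type: "SET_ACCOUNT"; bankAccount: string }
  | { type: "NEXT_STEP" };

function reducer(state: AffiliationState, action: Action): AffiliationState {
  switch (action.type) {
    case "APPLY_COUPON":
      return { ...state, coupon: action.coupon };
    case "SET_ACCOUNT":
      return { ...state, bankAccount: action.bankAccount };
    case "NEXT_STEP":
      return { ...state, step: state.step + 1 };
  }
}

// Replaying the same actions always yields the same state:
const initial: AffiliationState = { step: 0 };
const final = [
  { type: "APPLY_COUPON", coupon: "WELCOME10" } as Action,
  { type: "SET_ACCOUNT", bankAccount: "1234" } as Action,
  { type: "NEXT_STEP" } as Action,
].reduce(reducer, initial);
```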
    Sophie: Describe your experience implementing serverless workflows using AWS Lambda and S3. Can you detail a specific instance where you had to handle scalability, cold starts, or security (IAM roles)? How did you manage deployment and monitoring for these services?
    Jean Carlos Vega Vidarte: I’ve implemented a serverless workflow using AWS Lambda and Amazon S3 to process PDF invoices that were scraped by another service. When a file was stored in S3, it triggered a Lambda function that handled parsing and processing. While the workload didn’t involve extreme scalability challenges, I still designed the flow to be event-driven and stateless, allowing it to scale automatically with incoming uploads. For security, I configured AWS Identity and Access Management roles with least-privilege access, ensuring the function could only read from the specific S3 bucket and write to required resources. For deployment, we used AWS CloudFormation to define and automate the provisioning of Lambdas, permissions, and triggers, which improved consistency across environments. Monitoring and logging were handled through CloudWatch, allowing us to track execution logs and debug issues when needed.
    Sophie: Tell me about a time a specific user pain point dictated a difficult technical pivot or architectural decision. What was the conflict between the 'ideal' technical solution and the 'necessary' user solution? How did you measure the success of your fix?
    Jean Carlos Vega Vidarte: In a financial application, users reported they were unable to delete certain objects from the admin panel, which was blocking operational workflows. After investigating, I found the root cause in our multi-tenant architecture: we had foreign key relationships spanning different database schemas, and the ORM couldn’t reliably resolve those dependencies. The “ideal” solution would have been to fully normalize and strictly enforce cross-schema relationships, but that approach wasn’t practical given the ORM limitations and the immediate user impact. Instead, I made a pragmatic architectural pivot by constraining foreign keys to specific schemas and redesigning the deletion process—moving heavy dependency cleanup into an asynchronous workflow. This introduced some additional complexity in terms of eventual consistency, but it allowed deletions to complete reliably and without blocking the UI. We measured success by a drop in deletion-related errors, faster response times in the admin panel, and fewer support tickets from users experiencing this issue.
  • Carlos Hernández strong fit
    View conversation
    Sophie: Walk me through a production-grade project where you used TypeScript, React/Next.js, and Node.js. Beyond the scope, focus on a complex technical challenge—such as state management, SSR/ISR optimization, or backend architecture. Why did you choose specific libraries (e.g., NestJS vs. Express), and how did those choices impact the final outcome?
    Carlos Hernández: In our digital signature application, we had a flow in which the user could scan a QR code with their phone to sign more easily. We had to decide how to notify the desktop app that the user had signed on the phone. We decided to use WebSockets instead of long-polling, as it's a more performant solution. We used the socket.io library instead of native WebSockets because room and namespace management is easier that way. With this decision, we gained performance on both the frontend and the backend.
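The room management the answer credits to socket.io can be shown with a tiny dependency-free sketch: the phone publishes into its signing session's room, and only the desktop client joined to that room is notified. Names and shapes here are illustrative.

```typescript
// Minimal room-based routing, mimicking the socket.io rooms idea:
// a message emitted to a room reaches only that room's handlers.
type Handler = (msg: string) => void;

const rooms = new Map<string, Handler[]>();

function join(room: string, handler: Handler): void {
  const handlers = rooms.get(room) ?? [];
  handlers.push(handler);
  rooms.set(room, handlers);
}

function emitToRoom(room: string, msg: string): void {
  (rooms.get(room) ?? []).forEach((h) => h(msg));
}

// Usage: each desktop joins a room keyed by its signing-session ID (the
// same ID encoded in the QR); the phone emits into that room on success.
const received: string[] = [];
join("session-1", (m) => received.push("desktop1:" + m));
join("session-2", (m) => received.push("desktop2:" + m));
emitToRoom("session-1", "signed");
```

In real socket.io this is `socket.join(sessionId)` on the desktop and `io.to(sessionId).emit(...)` from the server when the phone's signature lands.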
    Sophie: Describe your experience implementing serverless workflows using AWS Lambda and S3. Can you detail a specific instance where you had to handle scalability, cold starts, or security (IAM roles)? How did you manage deployment and monitoring for these services?
    Carlos Hernández: I created an API for scraping real estate pages using a stack of API Gateway + Lambda, with S3 for saving the images of the properties. I created a GitHub Actions workflow to handle the deployment of the Lambda function and improve scalability. Of course, I had to create IAM roles for the Lambda function to access the S3 bucket. For monitoring, I used CloudWatch logs and X-Ray traces.
    Sophie: Tell me about a time a specific user pain point dictated a difficult technical pivot or architectural decision. What was the conflict between the 'ideal' technical solution and the 'necessary' user solution? How did you measure the success of your fix?
    Carlos Hernández: Sometimes deadlines are so short that the team has no time to optimize solutions, so it's always important to find space to pay down technical debt. Once deployment of the feature was finished, we detected some slowness in a few of the services. We took some time to look into it, and we managed to reduce the response time and gained a lot of performance.
Moons · johan.mosquera@futuralabs.io · 1 · 1 · 1 · 1 · 1
1
  • Giuseppe strong fit
    View conversation
    Sophie: What is your previous experience in dental support?
    Giuseppe: Right, 5 years
    Sophie: What tools or software have you used in dental support?
    Giuseppe: Hubspot CRM
    Sophie: How do you handle stressful situations with difficult patients?
    Giuseppe: With patience. I try to resolve it myself using my own judgment, and if the situation escalates, I contact my manager
    Sophie: Are you available to work weekends or irregular shifts?
    Giuseppe: Yes
    Sophie: How do you stay up to date on dental practices and technologies?
    Giuseppe: By reading information on Perplexity
    Sophie: Why do you want to work at this dental clinic in particular?
    Giuseppe: Because I am very interested in their brand
    Sophie: What is your availability to start if you were selected?
    Giuseppe: Starting today
Moons · people@moonsdental.com · 2 · 2 · 2
1
  • Cynthia Berenice Pelayo Moreno strong fit
    View conversation
    Sophie: Tell me, which AI tools do you currently use in your daily work?
    Cynthia Berenice Pelayo Moreno: ChatGPT, Gemini, Canva.
    Sophie: Which tools do you use to manage processes and tasks, do follow-ups, and make sure nothing slips through?
    Cynthia Berenice Pelayo Moreno: Notion
    Sophie: Tell me, which data analysis tools have you used to organize information, metrics, and dashboards?
    Cynthia Berenice Pelayo Moreno: I'm taking a Power BI course but haven't finished it yet.
    Sophie: When could you start working if you were selected for this position?
    Cynthia Berenice Pelayo Moreno: In one week, once I hand over my current role.
    Sophie: Can you work on-site every day at our office in CDMX near the Viaducto metro station?
    Cynthia Berenice Pelayo Moreno: Yes, no problem.
B4talent · rbranco@b4talent.com.br · 4 · 5 · 5
Company · Admin Email · Registered · Audiences Created · Candidates Added · Campaigns Created · Candidates Invited · Actually Sent · Completed
GrowthStage · nihir@growthstage.ai · New
Talently · martin.delaflor@talently.tech · New · 1 · 42 · 1 · 42 · 42
13
  • Kevin Abel Brito Díaz strong fit
    View conversation
    Sophie: Walk me through a production-grade project where you used TypeScript, React/Next.js, and Node.js. Beyond the scope, focus on a complex technical challenge—such as state management, SSR/ISR optimization, or backend architecture. Why did you choose specific libraries (e.g., NestJS vs. Express), and how did those choices impact the final outcome?
    Kevin Abel Brito Díaz: One production-grade project that comes to mind was a large e-commerce initiative for Paris CL (Cencosud), where I worked as part of the engineering team delivering scalable customer-facing features and backend integrations. The frontend stack was based on React, and the team selected this ecosystem because most developers already had strong experience with it, which accelerated delivery speed and reduced onboarding friction. We also benefited from the maturity of the ecosystem and strong community support. On the backend, we chose NestJS over Express because the project required a more structured and scalable architecture. NestJS provided clear module separation, dependency injection, decorators, and better maintainability for a growing codebase with multiple contributors. This was especially valuable as the platform evolved and new services were added. One of the main technical challenges was state consistency and integration between frontend flows and backend services, particularly in e-commerce processes where pricing, promotions, and inventory needed to stay synchronized. We addressed this by keeping frontend state predictable and centralizing business logic in backend services instead of duplicating rules on the client side. From an engineering perspective, NestJS helped standardize patterns across the team, improve code organization, and simplify testing. The final outcome was faster collaboration, easier maintenance, and a platform architecture better prepared for future growth.
    Sophie: Describe your experience implementing serverless workflows using AWS Lambda and S3. Can you detail a specific instance where you had to handle scalability, cold starts, or security (IAM roles)? How did you manage deployment and monitoring for these services?
    Kevin Abel Brito Díaz: I’ve worked with AWS services in environments where serverless components were used to support integrations, file processing, and event-driven workflows. A representative example involved handling file uploads and automated processing flows using Amazon S3 as the storage layer and Lambda functions to trigger validations and downstream processing after new files were uploaded. One important challenge was scalability during peak usage periods, where multiple files could be uploaded simultaneously. The advantage of Lambda was the ability to scale automatically based on incoming events, which reduced the need to manage dedicated infrastructure. Regarding cold starts, we minimized impact by keeping functions lightweight, reducing unnecessary dependencies, and designing logic to execute efficiently. For time-sensitive flows, we also paid attention to runtime selection and package size. From a security standpoint, we followed least-privilege principles with IAM roles, ensuring each Lambda function only had access to the exact S3 buckets or AWS resources required. For deployment, we used structured CI/CD workflows with version-controlled environments, which helped keep releases consistent and traceable. For monitoring, we relied on centralized logs and execution metrics to track failures, duration, and retries, allowing quick troubleshooting and continuous optimization. Overall, my focus in these architectures has always been reliability, maintainability, and operational efficiency.
    Sophie: Tell me about a time a specific user pain point dictated a difficult technical pivot or architectural decision. What was the conflict between the 'ideal' technical solution and the 'necessary' user solution? How did you measure the success of your fix?
    Kevin Abel Brito Díaz: One example that comes to mind was during an e-commerce project where users were experiencing friction during checkout because some product availability and promotional data were not updating fast enough in the purchase flow. From a technical perspective, the “ideal” solution was to build a more complete real-time distributed architecture with multiple synchronized services and deeper event-driven communication. However, that approach required more implementation time and added unnecessary complexity for the immediate business need. The necessary user-focused solution was to simplify the architecture and prioritize response speed and consistency in the checkout experience. We reworked critical API flows, optimized caching rules, and centralized some business validations server-side to reduce mismatches between frontend data and backend responses. The result was a smoother checkout process, fewer cart abandonment issues related to pricing or stock inconsistencies, and faster page interactions during peak traffic periods. We measured success through lower incident reports, improved response times, and better conversion behavior in the affected checkout steps. That experience reinforced an important principle for me: elegant architecture matters, but solving the user’s pain point at the right time matters more.
  • MARCOS LEBRON strong fit
    View conversation
    Sophie: Walk me through a production-grade project where you used TypeScript, React/Next.js, and Node.js. Beyond the scope, focus on a complex technical challenge—such as state management, SSR/ISR optimization, or backend architecture. Why did you choose specific libraries (e.g., NestJS vs. Express), and how did those choices impact the final outcome?
    MARCOS LEBRON: One production-grade project I worked on was the modernization of a healthcare prescription platform. The existing system had a legacy frontend and backend, and the goal was to move to a more scalable architecture with a modern frontend, stronger API contracts, better validation, and a foundation that could support multi-tenant usage for doctors, clinics, pharmacies, and administrators. One of the main technical challenges was managing the prescription flow; it was not just a simple form. The user had to select a patient, choose medications, validate prescription rules, attach doctor information, generate a PDF, and preserve enough state so the experience felt fast and reliable. I treated that flow almost like a prescription session, where the frontend state represented the current draft while the backend remained the source of truth. For state management, I avoided putting everything into a global store. Instead, I separated the state into different layers: local UI state stayed inside components, form state was handled closer to the form layer, and server state was handled through API-driven data-fetching patterns. The idea was to avoid a giant global state object that becomes hard to maintain. For shared domain state, like the selected tenant or authenticated user context, I kept that at a higher level because many modules needed access to it. On the rendering side, Next.js was useful because not every page needed the same strategy. Public or semi-static content could benefit from server rendering or static generation, while authenticated areas were more dynamic and API-driven. For example, dashboard and prescription flows needed fresh data and authorization checks, while informational pages or configuration-driven views could be optimized differently. On the backend side, I also needed to structure a BFF layer for the different microfrontend applications.
For backend architecture, if I had to choose between NestJS and Express in a similar production-grade environment, I would choose based on the size and complexity. Express is great when you need something lightweight. But if the API has multiple modules, validation rules, authentication, role-based access, and many developers working on it, I prefer NestJS because it gives you a stronger structure through modules, dependency injection, DTOs, guards, and interceptors.
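The DTO-based validation Marcos credits to NestJS can be illustrated without the framework. In NestJS this is typically done with `class-validator` decorators and a `ValidationPipe`; the hand-rolled type guard below is a simplified sketch of the same idea, and the `CreatePrescriptionDto` shape is hypothetical.

```typescript
// Sketch of DTO-style validation: incoming data is checked against a
// declared shape before it reaches business logic. Field names are assumed.
interface CreatePrescriptionDto {
  patientId: string;
  medicationIds: string[];
}

// Type guard standing in for NestJS's ValidationPipe + class-validator.
function isCreatePrescriptionDto(input: unknown): input is CreatePrescriptionDto {
  if (typeof input !== "object" || input === null) return false;
  const o = input as Record<string, unknown>;
  return (
    typeof o.patientId === "string" &&
    o.patientId.length > 0 &&
    Array.isArray(o.medicationIds) &&
    o.medicationIds.every((m) => typeof m === "string")
  );
}
```

A controller would reject any payload failing this guard before the service layer runs, which is the "stronger structure" argument for NestJS over bare Express.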
    Sophie: Describe your experience implementing serverless workflows using AWS Lambda and S3. Can you detail a specific instance where you had to handle scalability, cold starts, or security (IAM roles)? How did you manage deployment and monitoring for these services?
    MARCOS LEBRON: I have experience working with AWS services like S3 and ECS, and also using Lambda and S3 for asynchronous processing. I implemented a file-processing pipeline: the user uploaded the file to S3, and that S3 event triggered a Lambda function. The Lambda then validated the file, extracted metadata, and transformed it. The reason I like this approach is that it keeps heavy work out of the main API requests. Instead of making the user wait while the backend processes the file, the API can return a presigned URL, store the file in S3, and let Lambda handle the processing in the background. For cold starts, I keep the Lambda package small and avoid unnecessary dependencies; I prefer Node or Python runtimes, which start faster compared to Java or C#. For security I applied least privilege: if a function only reads from one S3 bucket, the policy should be scoped to that specific ARN, nothing bigger than that, and I use AWS Secrets Manager for credentials. For deployment I prefer CI/CD and IaC, which could be SAM, the Serverless Framework, or Terraform; the important thing is that Lambdas, events, IAM roles, and environment variables are defined in code and deployed consistently. For monitoring I use CloudWatch for logs and metrics, set alarms, and add a dead-letter queue so events are not lost.
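The least-privilege scoping Marcos describes, a policy limited to one action on one bucket's ARN, looks like this as an IAM policy document. The bucket name is illustrative; in practice this would be emitted by the IaC tool (SAM, Serverless Framework, or Terraform) he mentions.

```typescript
// Sketch of a least-privilege IAM policy scoped to reading one bucket's
// objects, and nothing else. Bucket name is a placeholder.
function readOnlyPolicyFor(bucket: string) {
  return {
    Version: "2012-10-17",
    Statement: [
      {
        Effect: "Allow",
        Action: ["s3:GetObject"],                    // read objects only
        Resource: [`arn:aws:s3:::${bucket}/*`],      // this bucket, nothing broader
      },
    ],
  };
}
```

A function that also needed to list the bucket would get `s3:ListBucket` on the bucket ARN itself added explicitly, rather than a wildcard.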
    Sophie: Tell me about a time a specific user pain point dictated a difficult technical pivot or architectural decision. What was the conflict between the 'ideal' technical solution and the 'necessary' user solution? How did you measure the success of your fix?
    MARCOS LEBRON: A good example was a time at EPAM where I worked on a large enterprise frontend ecosystem with React, TypeScript, microfrontends, shared UI libraries, and a BFF layer. The user pain point came from both developers and product teams. We had around 25 applications, and teams were struggling with inconsistent frontend patterns, duplicated logic, slow local builds, and repeated API transformation logic across different microfrontends. So features took longer to deliver, small UI changes required more coordination than they should, and onboarding new devs was slow. The ideal technical solution would probably have been to redesign the whole platform from scratch and make every application follow the same perfect architecture. But that was not realistic because these were production apps, teams were actively delivering features, and we could not pause the business to do a full rewrite. The difficult part was balancing technical idealism with what users actually needed. As engineers, we often want the cleanest possible architecture, but the users needed faster builds, more consistency, and less friction immediately. So we chose an incremental migration instead of a risky full rewrite and made a more pragmatic architectural decision. Instead of trying to fix everything at once, we focused on the biggest pain points. We migrated the build system from Webpack to Vite, improved the shared UI libraries, and used the BFF layer to simplify how the frontend consumed backend data. We measured success with both technical and practical signals. The biggest concrete metric was that the Vite migration reduced build times by more than 50%. We also saw better developer experience, faster local feedback loops, more reuse from shared packages, and cleaner integration through the BFF. The takeaway for me was that architecture should solve real pain.
In this case, the right decision was not to chase a perfect rewrite, but to make targeted architectural changes that improved delivery speed and system consistency while keeping production stable.
  • Andres Rocha
    View conversation
    Sophie: Walk me through a production-grade project where you used TypeScript, React/Next.js, and Node.js. Beyond the scope, focus on a complex technical challenge—such as state management, SSR/ISR optimization, or backend architecture. Why did you choose specific libraries (e.g., NestJS vs. Express), and how did those choices impact the final outcome?
    Andres Rocha: Sure, one project that comes to mind is an investment design platform I worked on, where users could build and simulate capitalization tables in real time. Sounds straightforward until you factor in that every single input change needs to cascade instantly — dilution percentages, valuations, equity breakdowns — all live, all consistent across multiple investor profiles simultaneously. That was the real challenge. The core problem was state management. Cap tables are inherently interdependent, so a change in one cell isn't isolated — it triggers a chain of recalculations. We couldn't let the UI ever show a half-updated state, and we couldn't afford race conditions during simulation runs either. The solution was splitting the workload: heavy financial calculations went through RabbitMQ as background jobs, while the UI stayed responsive through optimistic updates and WebSocket-pushed confirmations. That separation of concerns was what kept the system from becoming a nightmare to reason about. On the backend, we evaluated Express versus NestJS, and the arguments for NestJS were pretty clear once we mapped out the requirements. Express gives you freedom, but on a financial platform with a larger team, that freedom tends to become inconsistency over time — everyone structures their modules differently, dependency management gets messy, and testing becomes painful. NestJS forces a modular, DI-based architecture from day one. That structure was worth more than any flexibility we were giving up. The TypeScript integration was another big factor. We were using strictly typed DTOs for every calculation service, and that caught a surprising number of potential bugs before they ever hit runtime. For financial logic, that's not a nice-to-have — it's a requirement. We also got clean patterns for the RabbitMQ microservice integration and WebSocket handling out of the box, rather than wiring all of that manually. 
In practice, we reduced boilerplate by around 30%, test coverage improved significantly, and the frontend team could mock services without needing to coordinate with us constantly — which made a real difference in sprint velocity. On the frontend, we used Next.js with ISR for the dashboard layer. Investment templates don't change by the second — they're read-heavy but do get updated periodically. ISR was the right fit: pages are pre-rendered statically, but when a user pushes an update, it propagates globally without a full rebuild. We kept LCP under 1.5s even on views with dense data visualizations, which was one of our harder non-functional requirements to hit.
    Sophie: Describe your experience implementing serverless workflows using AWS Lambda and S3. Can you detail a specific instance where you had to handle scalability, cold starts, or security (IAM roles)? How did you manage deployment and monitoring for these services?
    Andres Rocha: Most of my hands-on Lambda and S3 experience comes from my time at Konfio, where we ran a fairly large AWS-based architecture handling payment orders, KYC flows, and document processing across multiple microservices. A concrete case where scalability became a real concern was around document ingestion in the KYC platform. We had periods where a high volume of users were uploading identity documents simultaneously — images, PDFs, that kind of thing. The initial approach of routing uploads through the Lambda directly became a bottleneck fast: the function was handling both the HTTP layer and the S3 write, which under load created timeouts and inconsistent responses. The fix was straightforward once we identified it — we moved to presigned URLs, so the Lambda's only job was generating a short-lived, scoped URL that the client used to upload directly to S3. That decoupled the load from the function entirely and let S3 handle the ingestion at its own scale, which it does well. On cold starts — this was something we had to think about carefully for Lambdas that sat in critical paths, like the ones triggering downstream processes after a document was validated. Cold start latency in those cases was noticeable. The approach we settled on was Provisioned Concurrency for the functions that needed to respond fast, keeping a set number of instances warm at all times. For less critical functions, we used scheduled EventBridge rules as a lighter alternative — just enough to keep them from going completely cold during business hours. The tradeoff with Provisioned Concurrency is cost, so we were selective about where we applied it. For IAM, the rule we followed was strict least-privilege per function. Each Lambda had its own role scoped to exactly what it needed — if a function's job was to read from a specific S3 bucket, its role only allowed s3:GetObject on that bucket's ARN, nothing broader. 
This mattered especially in a fintech environment where a misconfigured or accidentally triggered function shouldn't be able to touch data it has no business accessing. We also used resource-based policies on the S3 side to add a second layer — so even if a role was misconfigured, the bucket itself would reject unauthorized actions. Deployment was managed through Terraform for infrastructure definitions and CodePipeline for the CI/CD side. Lambda configurations, IAM roles, S3 bucket policies — all version-controlled and applied through the pipeline, which made environment parity between staging and production much easier to maintain. For monitoring, CloudWatch was the primary tool — we had log groups per function, metric filters for error rates and duration, and dashboards aggregating the most critical functions. For anything that needed deeper tracing across services, we used X-Ray to follow a request through the chain and identify where latency or failures were actually happening.
    Sophie: Tell me about a time a specific user pain point dictated a difficult technical pivot or architectural decision. What was the conflict between the 'ideal' technical solution and the 'necessary' user solution? How did you measure the success of your fix?
    Andres Rocha: This one comes directly from the KYC platform at Konfio, and it's a good example of how skipping proper discovery with the actual end user can force a significant architectural rework down the line. The platform was originally built to support a document review flow for internal KYC analysts — the people responsible for validating loan applications. The initial design was straightforward: reviewers could see the most recent document uploaded by the applicant for each required field. That seemed reasonable at the time, but it turned out to be a critical gap. As the volume of applications grew, the team started flagging an unusual number of false rejections. Analysts were rejecting applications that should have been approved, and when we dug into the cases, a pattern emerged. Applicants would upload a document, it would get flagged for some reason — expired ID, wrong format, incomplete information — and they'd upload a new version. But the reviewer had no visibility into the previous submissions. They couldn't tell whether the issue had actually been corrected or whether the new document introduced a different problem. Without that context, decisions were being made on incomplete information. There was also a separate but related problem we uncovered during the same investigation. Sales reps, who were often sitting with applicants during the onboarding process, were sometimes uploading documents from their own accounts or pushing files through unofficial channels when something wasn't going through correctly. That was generating file inconsistencies on the platform side and bypassing the validation pipeline entirely. That category alone accounted for roughly 40% of rejection errors across all applications. The conflict was architectural. The system had been designed around a single-document-per-requirement model — there was no concept of document history or versioning at the data level. 
To fix this properly, we needed to restructure how documents were associated with applications: instead of a flat reference to the latest file, we needed a versioned relationship between each document type and all its submissions across the lifecycle of a request. That change wasn't isolated — it touched the storage layer, the review UI, the validation logic, and several downstream processes that consumed document state. The decision to do it properly rather than patch around it was driven by the KYC team themselves. Once we sat down with the analysts and mapped out how they actually worked through a review — what information they needed, what context they were missing — it became clear that a surface-level fix wouldn't hold. We needed to give them a tool that matched their actual workflow. We measured the outcome mostly through rejection error rates. False positives dropped significantly once reviewers had full document history available. The sales rep workaround problem was addressed through stricter upload attribution and validation at the point of submission, which closed that gap at the source. The broader result was a faster, more confident review process — fewer escalations, less back-and-forth with applicants, and a meaningful reduction in the time vendors were spending re-collecting documents that had already been submitted correctly.
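The versioned-document model Andres describes, replacing a flat "latest file" reference with the full submission history per requirement, can be sketched with a small data model. The type names, fields, and helpers below are illustrative, not taken from the Konfio codebase.

```typescript
// Sketch of a versioned document relationship: each requirement keeps every
// submission, so reviewers see the full history. Field names are assumed.
interface Submission {
  version: number;
  fileKey: string;
  uploadedAt: Date;
  uploadedBy: string;   // upload attribution, closing the sales-rep gap
}
interface DocumentRequirement {
  type: string;          // e.g. "identity-document"
  submissions: Submission[];
}

// Appending never overwrites: the previous version stays in the history.
function addSubmission(req: DocumentRequirement, fileKey: string, uploadedBy: string): Submission {
  const sub: Submission = {
    version: req.submissions.length + 1,  // monotonically increasing
    fileKey,
    uploadedAt: new Date(),
    uploadedBy,
  };
  req.submissions.push(sub);
  return sub;
}

// "Latest" is derived from the history, never stored separately,
// so it cannot drift out of sync with the submissions themselves.
function latest(req: DocumentRequirement): Submission | undefined {
  return req.submissions[req.submissions.length - 1];
}
```

Deriving the latest version rather than storing it is what makes the review UI, validation logic, and downstream consumers agree on document state.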
  • Sergio Matamoros strong fit
    View conversation
    Sophie: Walk me through a production-grade project where you used TypeScript, React/Next.js, and Node.js. Beyond the scope, focus on a complex technical challenge—such as state management, SSR/ISR optimization, or backend architecture. Why did you choose specific libraries (e.g., NestJS vs. Express), and how did those choices impact the final outcome?
    Sergio Matamoros: Developing the Ariel Scheduling System required a robust architecture using NestJS for a modular, type-safe backend and Next.js to leverage Incremental Static Regeneration for optimal performance. The core technical challenge involved managing real-time booking synchronization to prevent race conditions, which I solved by implementing optimistic updates via TanStack Query and WebSockets for immediate feedback and data integrity. This choice of a structured framework over a more minimal approach like Express allowed for seamless validation through DTOs and a scalable codebase that maintained high speed and reliability even as complex features like inventory management were integrated.
    Sophie: Describe your experience implementing serverless workflows using AWS Lambda and S3. Can you detail a specific instance where you had to handle scalability, cold starts, or security (IAM roles)? How did you manage deployment and monitoring for these services?
    Sergio Matamoros: I have implemented serverless workflows using AWS Lambda and S3 to automate high-volume data processing, specifically for managing dynamic assets and inventory updates in a production environment. To address the challenge of cold starts, I optimized the execution environment by minimizing the deployment package size and utilizing Provisioned Concurrency for time-sensitive triggers, ensuring the system remained responsive during sudden traffic spikes. Scalability was managed through S3 event notifications that triggered Lambda functions asynchronously, while security was strictly enforced using the principle of least privilege with granular IAM roles, limiting function access only to the specific S3 prefixes and database rows required for the task. For deployment and monitoring, I utilized the AWS Serverless Application Model (SAM) alongside CloudWatch for real-time logging and performance metrics, allowing me to track execution duration and memory usage to ensure the workflow remained cost-effective and resilient under heavy loads.
    Sophie: Tell me about a time a specific user pain point dictated a difficult technical pivot or architectural decision. What was the conflict between the 'ideal' technical solution and the 'necessary' user solution? How did you measure the success of your fix?
    Sergio Matamoros: While developing the Ariel Scheduling System, a significant user pain point emerged when clients in areas with unstable internet connectivity reported that the application would freeze or lose booking data during brief signal drops, leading to a "necessary" pivot toward a robust offline-first synchronization strategy. The "ideal" technical solution from a developer's perspective was a strictly server-authoritative model that ensured absolute data consistency by requiring a constant connection, but the "necessary" user solution demanded that the interface remain interactive even when offline. This forced an architectural shift to implement a local-first state using IndexedDB through TanStack Query's persistent cache, allowing users to continue the booking flow while the application queued requests locally. The conflict was resolved by creating a synchronization engine that handled background reconciliation once the connection was restored, prioritizing the user's intent over immediate server confirmation. Success was measured by a 35% decrease in abandoned booking sessions and a significant drop in support tickets related to "lost data," proving that sacrificing a purely synchronous architecture for a more resilient, client-side caching model was essential for real-world usability in the technician's specific operating environment.
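The offline-first queueing Sergio describes can be reduced to a small sketch: booking actions are recorded locally while the connection is down and flushed in order on reconnect. His real system persists through IndexedDB via TanStack Query's cache; here a plain in-memory array stands in for that store, and the `send` callback is a hypothetical transport.

```typescript
// Minimal sketch of an offline action queue with ordered replay.
// `send` returns true on success; a failure halts the flush so order holds.
type BookingAction = { id: string; payload: unknown };

class OfflineQueue {
  private pending: BookingAction[] = [];
  constructor(private send: (a: BookingAction) => boolean) {}

  // Called while offline: record the user's intent locally.
  enqueue(action: BookingAction): void {
    this.pending.push(action);
  }

  // Called on reconnect: replay in FIFO order, stopping at the first
  // failure so later actions never overtake earlier ones.
  flush(): number {
    let sent = 0;
    while (this.pending.length > 0) {
      if (!this.send(this.pending[0])) break;
      this.pending.shift();
      sent++;
    }
    return sent;
  }

  get size(): number {
    return this.pending.length;
  }
}
```

The stop-on-failure rule is the simplest form of the background reconciliation he mentions: unsent actions survive to the next flush attempt instead of being lost.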
  • Sebastián Josué Montero Madrigal strong fit
    View conversation
    Sophie: Walk me through a production-grade project where you used TypeScript, React/Next.js, and Node.js. Beyond the scope, focus on a complex technical challenge—such as state management, SSR/ISR optimization, or backend architecture. Why did you choose specific libraries (e.g., NestJS vs. Express), and how did those choices impact the final outcome?
    Sebastián Josué Montero Madrigal: While my strongest production experience has been backend-focused, I've been applying full-stack principles using TypeScript, React, and Node.js concepts. I built a REST API with ASP.NET Core for managing a motorcycle inventory system, using a layered architecture with controllers, services, and data access. A key challenge was maintaining scalability and clean separation of concerns, which I solved by isolating business logic in a service layer and using dependency injection; this is why I would choose NestJS over Express in Node.js, since it enforces a similar structured approach. On the frontend side I've worked with React, focusing on state management and component design, ensuring efficient data flow and avoiding unnecessary re-renders. Regarding Next.js, I understand trade-offs like SSR vs ISR: for frequently changing data, such as inventory status, I would use SSR for consistency, while ISR would be useful for less dynamic content to improve performance. Overall, I bring strong backend fundamentals and a growing practical understanding of full-stack development, especially around architecture and data flow.
    Sophie: Describe your experience implementing serverless workflows using AWS Lambda and S3. Can you detail a specific instance where you had to handle scalability, cold starts, or security (IAM roles)? How did you manage deployment and monitoring for these services?
    Sebastián Josué Montero Madrigal: I haven't yet implemented a full production workflow using AWS Lambda and S3, but I have studied and practiced serverless architecture concepts and how they apply to scalable systems. From my backend experience building REST APIs, I understand how to design stateless services, which translates well to Lambda functions. For example, in a system like my motorcycle inventory API, I would use Lambda to handle event-driven tasks such as processing updates or handling file uploads, with S3 used for storage. Regarding scalability, one advantage of Lambda is its automatic scaling, but I'm aware of challenges like cold starts; to mitigate this I would minimize function size, reduce dependencies, and consider strategies like keeping functions warm for latency-sensitive operations. For security, I understand the importance of IAM roles and least-privilege access: each Lambda function should only have the permissions strictly necessary, for example limited access to specific S3 buckets or actions. For deployment and monitoring, I'm familiar with tools like AWS SAM or the Serverless Framework for structured deployments, and CloudWatch for logging and performance monitoring. Even though I'm still building hands-on experience with these tools, I have a solid understanding of how to apply them in real scenarios. Overall, while my direct production experience with AWS serverless is still growing, I have a strong foundation in backend architecture and I'm actively working on applying these concepts in cloud environments.
    Sophie: Tell me about a time a specific user pain point dictated a difficult technical pivot or architectural decision. What was the conflict between the 'ideal' technical solution and the 'necessary' user solution? How did you measure the success of your fix?
    Sebastián Josué Montero Madrigal: In one of my backend projects I built a motorcycle inventory system where users needed to quickly view and update vehicle availability. Initially I designed the system with a more "ideal" architecture: strict separation of layers, normalized data structures, and multiple relational joins to ensure consistency and scalability. However, during testing I realized this approach introduced latency when retrieving data, especially for common user actions like listing available vehicles. From a technical perspective, the design was clean and correct, but it didn't provide the responsiveness users needed. The pivot was to optimize for the user experience rather than strict architectural purity: I introduced more efficient queries and simplified some data access patterns, even if that meant slightly relaxing normalization in certain areas. This reduced response times significantly and made the system feel much more responsive. The main conflict was between maintaining a perfectly structured, theoretically ideal backend versus delivering fast, practical performance for end users; I chose to prioritize the user experience while still keeping the system maintainable. To measure success I compared response times before and after the changes and validated that common operations became noticeably faster. Additionally, the system handled frequent queries more efficiently without affecting stability. This experience reinforced the importance of balancing clean architecture with real-world user needs, especially in performance-sensitive applications.
  • José Torres strong fit
    View conversation
    Sophie: Walk me through a production-grade project where you used TypeScript, React/Next.js, and Node.js. Beyond the scope, focus on a complex technical challenge—such as state management, SSR/ISR optimization, or backend architecture. Why did you choose specific libraries (e.g., NestJS vs. Express), and how did those choices impact the final outcome?
    José Torres: For a recent high-demand project (called Project Windfall), I set up a monorepo with Turborepo using Next.js and NestJS. I chose the NestJS library over Express for the backend because, since it’s a system with complex business logic, I needed a modular and robust architecture that would facilitate dependency injection and long-term maintenance, avoiding the clutter that often arises in Express as the project scales. The biggest challenge was managing real-time state for thousands of concurrent users. To avoid compromising performance, I separated the logic: I used TanStack Query for server-side caching and Zustand for atomic global state. This, combined with a Redis layer on the backend and database index optimization, reduced response latency from 500 ms to less than 100 ms, enabling smooth updates without unnecessary re-renders. I used TypeScript to share data models between the backend and frontend within the monorepo, achieving end-to-end type safety, which virtually eliminated all integration errors. Ultimately, these decisions enabled the platform to handle critical traffic spikes while continuing to deliver a robust user experience. I employed the modular atomic design methodology to make the components more modular and scalable.
    Sophie: Describe your experience implementing serverless workflows using AWS Lambda and S3. Can you detail a specific instance where you had to handle scalability, cold starts, or security (IAM roles)? How did you manage deployment and monitoring for these services?
    José Torres: I've implemented serverless workflows using S3 as a trigger for AWS Lambda functions dedicated to file processing and data extraction. To handle scalability and prevent bottlenecks, I used Amazon SQS as a buffer, allowing the system to process load spikes asynchronously without hitting concurrency limits. Regarding cold starts, I optimized performance by reducing bundle sizes with esbuild and applying Provisioned Concurrency to business-critical processes. For security, I strictly follow the principle of least privilege using specific IAM Roles, limiting Lambda access to exact S3 buckets and prefixes. I manage deployment and monitoring through Infrastructure as Code (IaC) with the Serverless Framework integrated into GitHub Actions pipelines, while centralizing observability via CloudWatch and AWS X-Ray. This approach ensures an auditable, secure, and highly scalable environment that aligns perfectly with project operational costs.
    Sophie: Tell me about a time a specific user pain point dictated a difficult technical pivot or architectural decision. What was the conflict between the 'ideal' technical solution and the 'necessary' user solution? How did you measure the success of your fix?
    José Torres: In a project for an online casino, we faced a conflict between data consistency and perceived performance. The "ideal" technical solution was to validate every action directly with the server to ensure 100% balance accuracy, but this created frustrating latency for users with unstable connections. To address this pain point, we pivoted to an Optimistic UI architecture using Zustand and TanStack Query. This allowed the interface to react instantly (under 100ms) by predicting the success of an action on the client-side, while the actual synchronization happened in the background with a robust rollback system in case of server-side failure. We measured success through both product metrics and technical performance. We saw a 25% increase in session duration as the visual fluidity eliminated the friction that was causing user drop-off. Technically, we monitored the "rollback rate" to ensure our prediction logic remained accurate and optimized our Interaction to Next Paint (INP), moving from a critical state to "Good" in Core Web Vitals. This pivot proved that prioritizing immediate feedback over absolute synchronous consistency was essential for maintaining user engagement.
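The optimistic-UI pivot José describes rests on one mechanism: apply the predicted result immediately, then roll back if the server rejects it. The sketch below shows only the snapshot/rollback mechanics; the real flow is asynchronous (TanStack Query's mutation lifecycle against a Zustand store), and the `confirm` callback and wallet shape are simplified stand-ins.

```typescript
// Sketch of an optimistic update with rollback. The state snapshot taken
// before the optimistic write is what makes the rollback safe.
interface WalletState { balance: number; }

function optimisticSpend(
  state: WalletState,
  amount: number,
  confirm: () => boolean,   // stand-in for the server round-trip
): { state: WalletState; rolledBack: boolean } {
  const previous = { ...state };                      // snapshot for rollback
  const next = { balance: state.balance - amount };   // apply prediction immediately
  if (confirm()) return { state: next, rolledBack: false };
  return { state: previous, rolledBack: true };       // server rejected: restore
}
```

The "rollback rate" metric he mentions falls out naturally here: it is just the fraction of calls that return `rolledBack: true`, which is what tells you whether the client-side prediction logic is still trustworthy.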
  • pablo guzman strong fit
    View conversation
    Sophie: Walk me through a production-grade project where you used TypeScript, React/Next.js, and Node.js. Beyond the scope, focus on a complex technical challenge—such as state management, SSR/ISR optimization, or backend architecture. Why did you choose specific libraries (e.g., NestJS vs. Express), and how did those choices impact the final outcome?
    pablo guzman: Well, the most complex project I have worked on was a travel tourism application. I used NestJS to configure the API service, and I chose it because of business requirements and because NestJS offers structure, fast building, and a scalable architecture (dependency injection, DTOs). On the frontend side we used Next.js because the main goal of the project was to get the maximum SEO score. It was also the first time I implemented TanStack Query, and it helped abstract and better handle most of the code dealing with the API, caching, and navigation, reducing stress on services (DB, HTTP requests).
    Sophie: Describe your experience implementing serverless workflows using AWS Lambda and S3. Can you detail a specific instance where you had to handle scalability, cold starts, or security (IAM roles)? How did you manage deployment and monitoring for these services?
    pablo guzman: Sure. I have used AWS just once, and most of my experience with this kind of deployment is on Vercel and Railway. In both I had to configure the server to turn off and on if there were no requests within a certain period of time. I handle most of my deployments via GitHub, synced with those services, and do the monitoring through each service's dashboard.
    Sophie: Tell me about a time a specific user pain point dictated a difficult technical pivot or architectural decision. What was the conflict between the 'ideal' technical solution and the 'necessary' user solution? How did you measure the success of your fix?
    pablo guzman: there was a time when our team was asked to deploy a React Native app to the iOS and Android stores, but it took more time than expected and time was running out, so the team proposed implementing the app as a Progressive Web App (PWA) instead. We got approval and were able to deliver that way, avoiding the iOS and Android issues for new apps
  • Johan Marcel Fernández Castillo strong fit
    View conversation
    Sophie: Walk me through a production-grade project where you used TypeScript, React/Next.js, and Node.js. Beyond the scope, focus on a complex technical challenge—such as state management, SSR/ISR optimization, or backend architecture. Why did you choose specific libraries (e.g., NestJS vs. Express), and how did those choices impact the final outcome?
    Johan Marcel Fernández Castillo: Sure! I recently led a project where we built a high-performance platform using Next.js, TypeScript, and Tailwind CSS with shadcn/ui. The main challenge was balancing a rich UI with fast load times. I architected the solution using React Server Components (RSC) for the majority of the application. This significantly improved our SEO and reduced the JavaScript bundle size since most of the logic stayed on the server. I reserved Client Components strictly for interactive elements like complex forms or real-time filters. For the backend, I leveraged Next.js API Routes instead of a separate Express server, because it simplified the architecture and deployment. Since the frontend and backend share the same TypeScript definitions, I ensured full type safety across the entire stack. This reduced cold start latency and infrastructure costs. By using Server Actions, I eliminated the need for most 'useEffect' data fetching, making the data flow more predictable and much easier to maintain.
    Sophie: Describe your experience implementing serverless workflows using AWS Lambda and S3. Can you detail a specific instance where you had to handle scalability, cold starts, or security (IAM roles)? How did you manage deployment and monitoring for these services?
    Johan Marcel Fernández Castillo: I’ve used AWS Lambda and S3 primarily to build event-driven, asynchronous workflows. I implemented a pipeline where users uploaded large assets to an S3 bucket. This triggered a Lambda function to process the files (optimization and metadata extraction) before updating our database. To handle spikes in traffic, I optimized the Lambda package size by keeping dependencies minimal, which significantly reduced cold start times. For latency-critical paths, I used Provisioned Concurrency. I followed the principle of 'least privilege.' Instead of broad permissions, I created specific IAM roles for the Lambda with access only to the necessary S3 prefixes and specific DynamoDB tables. I managed the infrastructure using AWS SAM to ensure reproducible environments. For monitoring, I relied on CloudWatch Logs and set up Alarms to notify the team of any execution failures or duration timeouts
    Sophie: Tell me about a time a specific user pain point dictated a difficult technical pivot or architectural decision. What was the conflict between the 'ideal' technical solution and the 'necessary' user solution? How did you measure the success of your fix?
    Johan Marcel Fernández Castillo: A good example was in an HOS (Hours of Service) system I worked on for a company in the sugar industry. The ideal technical solution was strict validation at every step: driver license checks, freight direction validation, and route confirmation with separate API calls. Architecturally it was clean, but users hated it because dispatchers needed speed, not extra friction. The pain point was that operators were losing time with repeated validations, unnecessary re-renders, and form resets. I changed the flow using React, TypeScript, Apollo Client, and redux-form so validations only happened when values actually changed; for example, the driver license endpoint only ran if the license number was modified. I also optimized state handling to avoid unnecessary re-renders and duplicate requests. The backend team preferred stricter validation, but we prioritized the user experience while keeping critical checks. We measured success by faster form completion, fewer support complaints, and fewer repeated API calls. The best proof was that users stopped using manual workarounds and trusted the system more
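The S3-triggered processing Lambdas that Johan (and Jean Carlos, below) describe start from an S3 event notification carrying the bucket and object key. A minimal sketch of parsing that event, assuming the standard S3 notification shape but with illustrative type and function names:

```typescript
// Minimal local model of an S3 event notification record (illustrative types).
type S3Record = { s3: { bucket: { name: string }; object: { key: string } } };
type S3Event = { Records: S3Record[] };

// Extract the uploaded objects a processing Lambda would act on.
function objectsToProcess(event: S3Event): { bucket: string; key: string }[] {
  return event.Records.map((r) => ({
    bucket: r.s3.bucket.name,
    // S3 URL-encodes object keys in notifications (spaces arrive as '+').
    key: decodeURIComponent(r.s3.object.key.replace(/\+/g, " ")),
  }));
}
```

The real handler would then run the validation/optimization steps the candidates mention before writing metadata to the database.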
  • Eyder Garcia medium fit
    View conversation
    Sophie: Walk me through a production-grade project where you used TypeScript, React/Next.js, and Node.js. Beyond the scope, focus on a complex technical challenge—such as state management, SSR/ISR optimization, or backend architecture. Why did you choose specific libraries (e.g., NestJS vs. Express), and how did those choices impact the final outcome?
    Eyder Garcia: Well, to start, to use state management correctly, I would first clarify that SSR is used for almost all component or page development, because it follows best practices for better performance in data retrieval and SEO. It's also very useful to use ISR in order to keep editing pages or pieces of code without rebuilding the entire page. Regarding backend architecture, it depends on the product's scalability and how many teams collaborate on the project. For a product that doesn't need to scale much, we can start with a modular monolith pattern, since it's very clean and easier to understand for small-to-medium projects, and there I use Express.js. With NestJS, I'd prefer a microservices pattern, since the framework is very friendly and stable, provides many resources, and is very accessible for that kind of thing. But again, it depends on how many people collaborate on the project, and whether the product is highly scalable and complex and needs to be independent.
    Sophie: Describe your experience implementing serverless workflows using AWS Lambda and S3. Can you detail a specific instance where you had to handle scalability, cold starts, or security (IAM roles)? How did you manage deployment and monitoring for these services?
    Eyder Garcia: About AWS in general, I just have knowledge and a bit of experience, but I've used S3 and EC2 for personal projects. It's not difficult for me to learn new topics or standards; I have the techniques and skills to adapt and learn fast. For monitoring, I can use what the AWS platform provides, and also Sentry for backend and frontend.
    Sophie: Tell me about a time a specific user pain point dictated a difficult technical pivot or architectural decision. What was the conflict between the 'ideal' technical solution and the 'necessary' user solution? How did you measure the success of your fix?
    Eyder Garcia: I can set up a kind of comparison table to find the specific features that belong to each approach, and then I analyze it by exploring the key differences, the scope, and the disadvantages, and I also do research using AI and analyze the comparison
  • ANGEL VILLAVICENCIO strong fit
    View conversation
    Sophie: Walk me through a production-grade project where you used TypeScript, React/Next.js, and Node.js. Beyond the scope, focus on a complex technical challenge—such as state management, SSR/ISR optimization, or backend architecture. Why did you choose specific libraries (e.g., NestJS vs. Express), and how did those choices impact the final outcome?
    ANGEL VILLAVICENCIO: Sure. One of the main projects I worked on was a SaaS marketplace built with TypeScript, Next.js, and Node.js on AWS. On the frontend, we used SSR and ISR with Next.js to balance performance and dynamic data. A key challenge was handling frequently updated content without losing performance, so we combined ISR with client-side fetching using React Query. For state management, we avoided Redux and used React Query + local state, which simplified caching and improved performance. On the backend, we used NestJS for its modular architecture and scalability. The system was deployed on AWS using a serverless approach with Lambda, API Gateway, and S3 for storage. We integrated SQS for async processing (e.g., image handling, notifications) and used RDS with optimized queries for relational data. We also leveraged CloudFront for CDN and improved latency, and implemented infrastructure as code for consistent deployments. This allowed us to scale efficiently, decouple services, and maintain high performance under increasing load.
    Sophie: Describe your experience implementing serverless workflows using AWS Lambda and S3. Can you detail a specific instance where you had to handle scalability, cold starts, or security (IAM roles)? How did you manage deployment and monitoring for these services?
    ANGEL VILLAVICENCIO: Sure. I’ve worked extensively with serverless workflows using AWS Lambda and S3 in production environments. In one project, we built a pipeline for processing user-uploaded images. Files were uploaded to S3, which triggered Lambda functions for validation, transformation, and metadata storage. To handle scalability during peak traffic, we designed the system to be fully event-driven using S3 + SQS, allowing us to buffer and process jobs asynchronously without overloading the system. To mitigate cold starts, we kept Lambdas lightweight, optimized dependencies, and split functions by responsibility. For critical paths, we also tuned memory allocation to improve execution time. On the security side, we used IAM roles with least-privilege access, ensuring each Lambda only had permissions to the specific S3 buckets or services it needed. For deployment, we used infrastructure as code (AWS CDK) with CI/CD pipelines for consistent releases. For monitoring, we relied on CloudWatch for logs, metrics, and alerts, and also used AWS X-Ray for distributed tracing to analyze latency and debug issues across services. Additionally, in S3 we implemented lifecycle rules to optimize long-term storage costs, automatically transitioning objects to cheaper storage classes based on usage patterns. This setup allowed us to scale efficiently while maintaining reliability, observability, and cost efficiency.
    Sophie: Tell me about a time a specific user pain point dictated a difficult technical pivot or architectural decision. What was the conflict between the 'ideal' technical solution and the 'necessary' user solution? How did you measure the success of your fix?
    ANGEL VILLAVICENCIO: Sure, in one project, we initially had multiple systems running under a monolithic architecture. They were stable and working well, but as the business grew, the setup became costly and harder to scale. From a technical perspective, the ideal approach was to take time to analyze all the systems, define proper boundaries, and redesign the architecture into well-structured services. However, the business needed quick results and wasn’t willing to wait for a full redesign. The conflict was between building a clean, long-term scalable architecture versus delivering immediate improvements. We took an incremental approach. Instead of redesigning everything upfront, we identified the most critical systems and started decoupling them into independent services, keeping future scalability in mind. This allowed us to deliver value quickly while progressively improving the architecture. The trade-off was not having the full picture from the start, which required adjustments later. However, it helped us gain stakeholder trust and evolve the system without disrupting operations. We measured success through improved performance in key flows, reduced load on critical components, and faster feature delivery.
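The least-privilege IAM idea ANGEL (and Johan, above) describe can be illustrated with a toy policy check. This is not a real IAM evaluator, and the bucket ARN, types, and function names are all hypothetical; it only shows the principle that a role allows exactly the actions and resource prefixes it needs.

```typescript
// Toy model of a least-privilege policy (illustrative, not real IAM).
type Statement = { actions: string[]; resourcePrefix: string };

// Allowed only if some statement explicitly grants the action on the resource.
function isAllowed(policy: Statement[], action: string, resource: string): boolean {
  return policy.some(
    (s) => s.actions.includes(action) && resource.startsWith(s.resourcePrefix)
  );
}

// The Lambda's role: read-only, and only under one prefix of one bucket.
const lambdaRolePolicy: Statement[] = [
  { actions: ["s3:GetObject"], resourcePrefix: "arn:aws:s3:::uploads-bucket/incoming/" },
];
```

Anything outside that prefix, or any other action (delete, list, write), is denied by default.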
  • Ronaldo Sergio Jara Condori strong fit
    View conversation
    Sophie: Walk me through a production-grade project where you used TypeScript, React/Next.js, and Node.js. Beyond the scope, focus on a complex technical challenge—such as state management, SSR/ISR optimization, or backend architecture. Why did you choose specific libraries (e.g., NestJS vs. Express), and how did those choices impact the final outcome?
    Ronaldo Sergio Jara Condori: One of my big tech challenges was creating an AI application called "Nawa method". I needed to choose the whole tech stack and build the architecture by myself. I was evaluating NestJS and Express for the backend, and comparing these alternatives I chose Nest over Express for two main reasons. First, it has its own structured architecture that's very easy to follow, so you don't have to invent a new one; it's also easier for new team members to pick up, which is very helpful when you want to grow your team. Second, managing concurrency is easier than in other frameworks, and in an application where you want to implement AI it's very important to have a tool that helps with that. For the frontend, I chose React with React context. The challenge was how to structure the global state, because the application was planned to have a lot of interactivity and gamification. I divided responsibilities by creating different stores, each managing just one thing instead of one huge store, and that helped development a lot.
    Sophie: Describe your experience implementing serverless workflows using AWS Lambda and S3. Can you detail a specific instance where you had to handle scalability, cold starts, or security (IAM roles)? How did you manage deployment and monitoring for these services?
    Ronaldo Sergio Jara Condori: Honestly, I don't have hands-on experience with AWS Lambda or S3. Infrastructure and cloud services haven't been my main focus so far. What I do have some experience with is monitoring and observability. I'd choose OpenTelemetry for monitoring to manage traces, logs, and metrics; I think it's becoming the standard and it integrates well with most monitoring tools. For error tracking I've used Sentry, which I find very reliable for catching and diagnosing issues in production.
    Sophie: Tell me about a time a specific user pain point dictated a difficult technical pivot or architectural decision. What was the conflict between the 'ideal' technical solution and the 'necessary' user solution? How did you measure the success of your fix?
    Ronaldo Sergio Jara Condori: Once I was building an MVP for a client that shifted mid-way from a CMS with AI to a completely different product — AI chatbots. The ideal solution would've been to remove the old code entirely, but given the timeline I kept both systems coexisting in the same repo, heavily commented the old code and moved forward. It created technical debt, but it was the necessary call. What saved me was having built a solid design system upfront — the UI stayed mostly intact, changes were mainly in routes and larger components. The client got their MVP on time, that was the measure that mattered.
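The "several small stores instead of one huge one" split Ronaldo describes can be sketched as a minimal subscribe/set store. The React context wiring is omitted and all names are illustrative; the point is that each store notifies only its own subscribers.

```typescript
// Minimal observable store (illustrative sketch; context wiring omitted).
type Listener = () => void;

function createStore<T>(initial: T) {
  let state = initial;
  const listeners = new Set<Listener>();
  return {
    get: () => state,
    set: (next: T) => {
      state = next;
      listeners.forEach((l) => l()); // notify only this store's subscribers
    },
    subscribe: (l: Listener) => {
      listeners.add(l);
      return () => listeners.delete(l); // cleanup function
    },
  };
}

// One store per concern instead of one huge global object.
const scoreStore = createStore({ points: 0 });
const profileStore = createStore({ name: "" });
```

Updating `scoreStore` never re-renders components that only subscribe to `profileStore`, which is the benefit he attributes to splitting responsibilities.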
  • Jean Carlos Vega Vidarte strong fit
    View conversation
    Sophie: Walk me through a production-grade project where you used TypeScript, React/Next.js, and Node.js. Beyond the scope, focus on a complex technical challenge—such as state management, SSR/ISR optimization, or backend architecture. Why did you choose specific libraries (e.g., NestJS vs. Express), and how did those choices impact the final outcome?
    Jean Carlos Vega Vidarte: In a recent project, I contributed to a financial product affiliation platform using TypeScript, React with Next.js, and a Node.js backend. A key challenge was managing complex, interdependent state (coupons, campaigns, bank accounts), which I handled using Redux to ensure predictable updates and consistency across multi-step flows. On the backend, we adopted a microservices architecture with NestJS instead of Express due to its structured architecture, dependency injection, and TypeScript-first design, which improved maintainability, scalability, and team onboarding while enabling clear separation of domains like payments and user management.
    Sophie: Describe your experience implementing serverless workflows using AWS Lambda and S3. Can you detail a specific instance where you had to handle scalability, cold starts, or security (IAM roles)? How did you manage deployment and monitoring for these services?
    Jean Carlos Vega Vidarte: I’ve implemented a serverless workflow using AWS Lambda and Amazon S3 to process PDF invoices that were scraped by another service. When a file was stored in S3, it triggered a Lambda function that handled parsing and processing. While the workload didn’t involve extreme scalability challenges, I still designed the flow to be event-driven and stateless, allowing it to scale automatically with incoming uploads. For security, I configured AWS Identity and Access Management roles with least-privilege access, ensuring the function could only read from the specific S3 bucket and write to required resources. For deployment, we used AWS CloudFormation to define and automate the provisioning of Lambdas, permissions, and triggers, which improved consistency across environments. Monitoring and logging were handled through CloudWatch, allowing us to track execution logs and debug issues when needed.
    Sophie: Tell me about a time a specific user pain point dictated a difficult technical pivot or architectural decision. What was the conflict between the 'ideal' technical solution and the 'necessary' user solution? How did you measure the success of your fix?
    Jean Carlos Vega Vidarte: In a financial application, users reported they were unable to delete certain objects from the admin panel, which was blocking operational workflows. After investigating, I found the root cause in our multi-tenant architecture: we had foreign key relationships spanning different database schemas, and the ORM couldn’t reliably resolve those dependencies. The “ideal” solution would have been to fully normalize and strictly enforce cross-schema relationships, but that approach wasn’t practical given the ORM limitations and the immediate user impact. Instead, I made a pragmatic architectural pivot by constraining foreign keys to specific schemas and redesigning the deletion process—moving heavy dependency cleanup into an asynchronous workflow. This introduced some additional complexity in terms of eventual consistency, but it allowed deletions to complete reliably and without blocking the UI. We measured success by a drop in deletion-related errors, faster response times in the admin panel, and fewer support tickets from users experiencing this issue.
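The predictable multi-step updates Jean Carlos attributes to Redux come from funneling every change through one pure reducer. A minimal sketch, with a hypothetical state shape and action names (the real flows covered coupons, campaigns, and bank accounts):

```typescript
// Hypothetical reducer for a multi-step affiliation flow (illustrative names).
type AffiliationState = { step: number; couponApplied: boolean };
type Action =
  | { type: "NEXT_STEP" }
  | { type: "APPLY_COUPON" }
  | { type: "RESET" };

const initial: AffiliationState = { step: 1, couponApplied: false };

// A pure reducer makes every transition explicit and easy to replay in tests.
function reducer(state: AffiliationState, action: Action): AffiliationState {
  switch (action.type) {
    case "NEXT_STEP":
      return { ...state, step: state.step + 1 };
    case "APPLY_COUPON":
      return { ...state, couponApplied: true };
    case "RESET":
      return initial;
    default:
      return state;
  }
}
```

Because every state transition is a pure function of (state, action), interdependent fields stay consistent across the multi-step flow.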
  • Carlos Hernández strong fit
    View conversation
    Sophie: Walk me through a production-grade project where you used TypeScript, React/Next.js, and Node.js. Beyond the scope, focus on a complex technical challenge—such as state management, SSR/ISR optimization, or backend architecture. Why did you choose specific libraries (e.g., NestJS vs. Express), and how did those choices impact the final outcome?
    Carlos Hernández: In our digital signature application, we had a flow in which the user could scan a QR code with their phone to sign more easily. We had to decide how to notify the desktop app that the user had signed on the phone. We chose WebSockets over long-polling, as it's a more performant solution, and we used the socket.io library instead of native WebSockets because room and namespace management is easier that way. With this decision, we gained performance on both the frontend and the backend
    Sophie: Describe your experience implementing serverless workflows using AWS Lambda and S3. Can you detail a specific instance where you had to handle scalability, cold starts, or security (IAM roles)? How did you manage deployment and monitoring for these services?
    Carlos Hernández: I created an API for scraping real estate pages using a stack of API Gateway + Lambda, with S3 for saving the images of the properties. I created a GitHub Actions workflow to handle the deployment of the Lambda function and improve scalability. Of course, I had to create IAM roles for the Lambda function to access the S3 bucket. For monitoring, I used CloudWatch logs and X-Ray traces
    Sophie: Tell me about a time a specific user pain point dictated a difficult technical pivot or architectural decision. What was the conflict between the 'ideal' technical solution and the 'necessary' user solution? How did you measure the success of your fix?
    Carlos Hernández: Sometimes deadlines are so short that the team has no time to optimize solutions, so it's always important to find space for fixing technical debt. Once the deployment of the feature was finished, we detected some slowness in a few of the services. We set aside time to look into it, and we managed to reduce the response time and gained a lot of performance
Kickpot · hassan@kickpot.app · New · 1
Moons · johan.mosquera@futuralabs.io · 11111
1
  • Giuseppe strong fit
    View conversation
    Sophie: What is your previous experience in dental support?
    Giuseppe: Right, 5 years
    Sophie: What tools or software have you used in dental support?
    Giuseppe: Hubspot CRM
    Sophie: How do you handle stressful situations with difficult patients?
    Giuseppe: With patience. I try to resolve it myself using my own judgment, and if the situation escalates, I contact my boss
    Sophie: Are you available to work weekends or irregular shifts?
    Giuseppe: Yes
    Sophie: How do you stay up to date on dental practices and technologies?
    Giuseppe: By reading information on Perplexity
    Sophie: Why do you want to work at this particular dental clinic?
    Giuseppe: Because I'm very interested in their brand
    Sophie: What is your availability to start if you were selected?
    Giuseppe: Starting today
Moons · people@moonsdental.com · 11812017
8
  • David Alejandro López Martínez strong fit
    View conversation
    Sophie: Tell me what AI tools you currently use in your daily work?
    David Alejandro López Martínez: I currently use AI tools like ChatGPT for information analysis, writing, and process optimization. I rely on Excel and Google Sheets for data handling, and on Notion and Trello for organization and project tracking. I also use Canva to create visual content quickly, and I'm starting to integrate tools like Power BI and Zapier for analysis and automation, with the goal of improving efficiency and decision-making.
    Sophie: Tell me what tools you use to manage processes and tasks, follow up, and make sure nothing falls through the cracks?
    David Alejandro López Martínez: To manage processes and tasks I use tools like Notion and Trello, where I organize activities by priority, dates, and follow-up. I rely on Google Calendar for daily planning and reminders. I also work with checklists and periodic reviews to make sure nothing is left pending. In addition, I use tools like ChatGPT to structure processes and optimize organization, which helps me maintain an orderly and efficient workflow.
    Sophie: Tell me, what data analysis tools have you used to organize information, metrics, dashboards?
    David Alejandro López Martínez: I have mainly used Excel and Google Sheets to organize information, analyze metrics, and track data using pivot tables, filters, and reports
    Sophie: When could you start working if you were selected for this position?
    David Alejandro López Martínez: I could join immediately or on very short notice, since I currently have availability.
    Sophie: Can you work on-site every day at our CDMX office near the Viaducto metro station?
    David Alejandro López Martínez: Yes, of course
  • Berenice Gómez strong fit
    View conversation
    Sophie: Tell me what AI tools you currently use in your daily work?
    Berenice Gómez: ChatGPT (code), Google Gemini
    Sophie: Tell me what tools you use to manage processes and tasks, follow up, and make sure nothing falls through the cracks?
    Berenice Gómez: Mainly the Microsoft suite: Excel, Access, ERP
    Sophie: Tell me, what data analysis tools have you used to organize information, metrics, dashboards?
    Berenice Gómez: Excel, Power BI, and occasionally SQL
    Sophie: When could you start working if you were selected for this position?
    Berenice Gómez: Immediate availability
    Sophie: Can you work on-site every day at our CDMX office near the Viaducto metro station?
    Berenice Gómez: Yes
  • Emerson Kevin Martinez Ortiz strong fit
    View conversation
    Sophie: Tell me what AI tools you currently use in your daily work?
    Emerson Kevin Martinez Ortiz: Claude, Gemini, ChatGPT, Copilot, Notion
    Sophie: Tell me what tools you use to manage processes and tasks, follow up, and make sure nothing falls through the cracks?
    Emerson Kevin Martinez Ortiz: Besides AI tools, I use traditional tools to ensure solid management of processes and tasks. I work with Notion for project organization, Excel for counting and validating reports, and Outlook/Teams for communication
    Sophie: Tell me, what data analysis tools have you used to organize information, metrics, dashboards?
    Emerson Kevin Martinez Ortiz: SQL queries to pull information, which I then analyze in Excel using lookup, logical, and calculation functions, plus VBA automations, and finally present to management in interactive Power BI dashboards
    Sophie: When could you start working if you were selected for this position?
    Emerson Kevin Martinez Ortiz: Immediately
    Sophie: Can you work on-site every day at our CDMX office near the Viaducto metro station?
    Emerson Kevin Martinez Ortiz: Yes of course, no problem
  • Noe Perea strong fit
    View conversation
    Sophie: Tell me what AI tools you currently use in your daily work?
    Noe Perea: I mainly rely on Copilot for my BI analyses, and on Anaplan
    Sophie: Tell me what tools you use to manage processes and tasks, follow up, and make sure nothing falls through the cracks?
    Noe Perea: Notion and Asana
    Sophie: Tell me, what data analysis tools have you used to organize information, metrics, dashboards?
    Noe Perea: Power BI, Tableau, SQL, Power Query, and Python
    Sophie: When could you start working if you were selected for this position?
    Noe Perea: Immediately
    Sophie: Can you work on-site every day at our CDMX office near the Viaducto metro station?
    Noe Perea: No problem
    Sophie: Can you work on-site every day at our CDMX office near the Viaducto metro station?
    Noe Perea: Yes
  • Isra Rendon strong fit
    View conversation
    Sophie: Tell me what AI tools you currently use in your daily work?
    Isra Rendon: ChatGPT, GitHub, and PyTorch, only those
    Sophie: Tell me what tools you use to manage processes and tasks, follow up, and make sure nothing falls through the cracks?
    Isra Rendon: Asana and Notion, mostly those, for keeping logs or making any corrections
    Sophie: Tell me, what data analysis tools have you used to organize information, metrics, dashboards?
    Isra Rendon: Only Python, Microsoft, and a bit of Tableau
    Sophie: When could you start working if you were selected for this position?
    Isra Rendon: but the one I've used the most is Python
    Sophie: When could you start working if you were selected for this position?
    Isra Rendon: Whenever the company indicates
    Sophie: Can you work on-site every day at our CDMX office near the Viaducto metro station?
    Isra Rendon: Yes
  • Samuel Jiménez strong fit
    View conversation
    Sophie: Tell me what AI tools you currently use in your daily work?
    Samuel Jiménez: Of course
    Sophie: Tell me what AI tools you currently use in your daily work?
    Samuel Jiménez: I currently integrate tools like Copilot for enterprise use, which I apply to ticket analysis and management, as well as to repetitive work and data analysis. On the personal side, I have Claude Code, which I use for n8n workflows, chatbots, and documentation management
    Sophie: Tell me what tools you use to manage processes and tasks, follow up, and make sure nothing falls through the cracks?
    Samuel Jiménez: I currently use JIRA and ServiceNow as tools for managing incidents and requests, but I rely heavily on Obsidian, which has been very useful to me for mapping business processes and technical documentation.
    Sophie: Tell me, what data analysis tools have you used to organize information, metrics, dashboards?
    Samuel Jiménez: I currently use Power BI, with some functions in pandas and matplotlib (Python). In certain cases I use AI for normalizing and analyzing some of the information. I've also used MATLAB in the past
    Sophie: When could you start working if you were selected for this position?
    Samuel Jiménez: Immediately
    Sophie: Can you work on-site every day at our CDMX office near the Viaducto metro station?
    Samuel Jiménez: Whenever you require. Yes
  • Efrain Ayala strong fit
    View conversation
    Sophie: Tell me what AI tools you currently use in your daily work?
    Efrain Ayala: Currently ChatGPT, mainly
    Sophie: Tell me what tools you use to manage processes and tasks, follow up, and make sure nothing falls through the cracks?
    Efrain Ayala: The Office suite and/or company-provided enterprise platforms that support planning management
    Sophie: Tell me, what data analysis tools have you used to organize information, metrics, dashboards?
    Efrain Ayala: Mainly Excel, where I have the most experience
    Sophie: When could you start working if you were selected for this position?
    Efrain Ayala: Although I also have experience using visualization tools like Qlik and Looker
    Sophie: When could you start working if you were selected for this position?
    Efrain Ayala: Immediately
    Sophie: Can you work on-site every day at our CDMX office near the Viaducto metro station?
    Efrain Ayala: Of course, no problem. What are the working hours?
  • Cynthia Berenice Pelayo Moreno strong fit
    View conversation
    Sophie: Tell me what AI tools you currently use in your daily work?
    Cynthia Berenice Pelayo Moreno: ChatGPT, Gemini, Canva.
    Sophie: Tell me what tools you use to manage processes and tasks, follow up, and make sure nothing falls through the cracks?
    Cynthia Berenice Pelayo Moreno: Notion
    Sophie: Tell me, what data analysis tools have you used to organize information, metrics, dashboards?
    Cynthia Berenice Pelayo Moreno: I'm taking a Power BI course but I haven't finished it yet.
    Sophie: When could you start working if you were selected for this position?
    Cynthia Berenice Pelayo Moreno: In a week, once I've handed over my current position.
    Sophie: Can you work on-site every day at our CDMX office near the Viaducto metro station?
    Cynthia Berenice Pelayo Moreno: Yes, no problem.
B4talent  rbranco@b4talent.com.br  1101107
2
  • Carla Sousa strong fit
    View conversation
    Sophie: What is your experience with SAP ABAP? Can you give me an example of a project you worked on?
    Carla Sousa: I've been an ABAP developer for 14 years. My most recent engagement was a project at Trijay, for the client Santa Cruz. It was AMS! Before that I worked at other consultancies and on other projects.
    Sophie: How do you identify the root cause of a technology problem?
    Carla Sousa: Sorry for the delay, I was driving. About the problem: I look at the error and try to reproduce it, debug, and analyze the code for anything messy, and I check whether something was transported recently.
    Sophie: How do you organize your time on projects with tight deadlines?
    Carla Sousa: I prioritize whatever unblocks things fastest and has business impact, break the work into small tasks, and deliver them in sequence, aligning with the team to avoid surprises and reviewing the plan every day.
    Sophie: Have you worked on a multidisciplinary team? How do you communicate with professionals from different areas?
    Carla Sousa: Yes, many times. With functional consultants I mostly talk in terms of rules and processes; with technical people I go straight into the details. I validate understanding, avoid noise, and keep everyone constantly aligned.
    Sophie: Are you available to start immediately, or do you have a timeline in mind?
    Carla Sousa: Available for an immediate start!
    Sophie: What ABAP best practices do you always try to follow in your work?
    Carla Sousa: I keep the code clean, with clear names, following the standard. I reuse before reinventing: BAPIs, classes, CDS. I watch performance, writing correct SELECTs and avoiding unnecessary loops. I handle exceptions and messages properly, respect SAP standards, and avoid hardcoding.
    Sophie: Do you have experience with Fiscal and Financial receiving routines? Can you tell me a bit about that?
    Carla Sousa: Yes. I've worked with MIRO/MIGO and MM/FI integration: tax validation, CFOP, installments, and posting. Accounts payable, updates, and integrations. Adjustments in exits/BADIs and reports for checking and reconciliation. I've also handled discrepancies between purchase order, invoice, and goods receipt, as well as the new Brazilian tax rules.
    Sophie: Can you tell me about a situation where you had to explain a technical concept to a non-technical person?
    Carla Sousa: I simplify the language, use everyday examples, and focus on the business impact. I avoid technical terms and confirm the person understood before moving on.
    Sophie: Have you worked with system integration? What was that experience like?
    Carla Sousa: I've integrated SAP with other systems, aligning rules, data formats, and validations. I was involved from design through testing and production support. I also handled errors, reprocessing, and alignment with other areas to keep everything stable. It was quite challenging but very enriching.
  • Marco Antonio strong fit
    View conversation
    Sophie: What is your experience with SAP ABAP? Can you give me an example of a project you worked on?
    Marco Antonio: I'll list some examples, but if you prefer, we can talk.
    Sophie: What is your experience with SAP ABAP? Can you give me an example of a project you worked on?
    Marco Antonio: I'm an ABAP consultant with 5 years of experience, 4 of them at multinational companies. I've worked on several SAP projects:
    * AMS support -> Troubleshooting, improving existing programs and processes, and building new programs and processes to meet business needs.
    * NT (Nota Técnica) project -> Organizing the application of SNotes to meet the government's legal requirements.
    * ECC to S/4HANA migration -> Responsible for adapting custom programs, validating migrated Tax and Pricing tables, and running end-to-end tests. Worked closely with global teams to keep deliveries aligned and successful throughout the migration cycle.
    * John Deere Parts Support (MM and SD) -> Delivered continuous-improvement solutions, cutting the incident queue by 60% through root cause analysis (RCA), in collaboration with business and operations teams to ensure effective, sustainable resolutions.
    * Heineken Toll Voucher Automation -> Built third-party integrations using and consuming OData services, handling XML payloads received via web services from a Transportation Management system and transmitting the processed data to external platforms. Developed and supported the rollout of automated toll calculation and purchasing processes, with gains in operational efficiency and performance.
    Sophie: How do you identify the root cause of a technology problem?
    Marco Antonio: Among other projects.
    Sophie: How do you identify the root cause of a technology problem?
    Marco Antonio: First, I contact whoever is experiencing the problem, whether a business or operations person. Second, I work out how to reproduce it. Third, I do advanced debugging while reproducing the problem to identify the possible causes.
    Sophie: How do you organize your time on projects with tight deadlines?
    Marco Antonio: I organize my work by creating cards with separate tasks and defining their deadlines and priorities. That way I always know where to start, where I am, and where I'm going.
    Sophie: Have you worked on a multidisciplinary team? How do you communicate with professionals from different areas?
    Marco Antonio: The teams I've worked on are the usual ones in the SAP world: developers and analysts. I communicate with professionals from other areas with curiosity, trying to understand their work and looking for gaps where I, as a developer, can improve a process for them.
    Sophie: Are you available to start immediately, or do you have a timeline in mind?
    Marco Antonio: I'm available to start immediately.
    Sophie: What ABAP best practices do you always try to follow in your work?
    Marco Antonio: - I develop with ABAP 7.4 (inline) syntax, which SAP guarantees is more performant. - I'm sharpening my CDS view skills to use them in S/4HANA environments, which makes table reads (push-down) much faster and follows clean core principles. - I follow OO concepts, which keeps objects better organized. - I comment all my code to make it easy to understand what it's doing. - When I finish developing a program, I run ATC to check for warnings or errors, and I also run performance tests.
    Sophie: Do you have experience with Fiscal and Financial receiving routines? Can you tell me a bit about that?
    Marco Antonio: I've worked in the FI module, but it was focused on Warranty, so I don't have experience with fiscal and financial receiving. But I'm eager to learn. I take on any new challenge and always manage to reach a solution.
    Sophie: Can you tell me about a situation where you had to explain a technical concept to a non-technical person?
    Marco Antonio: Yes, I'm someone who enjoys passing on knowledge, and since I'm very patient, that helps the other person learn. I've taught the basics of ABAP to SAP functional consultants (so they can debug a program) and to data analysts (so they can start developing in ABAP).
    Sophie: Have you worked with system integration? What was that experience like?
    Marco Antonio: Yes, in my most recent role at Heineken I was part of a project to automate toll-voucher purchases for drivers (the Toll Voucher project). I used an existing web service to consume data from a system called Shipsy, where carriers registered drivers, RENAVAM numbers, truck plates, and so on. With the received data, I processed it in SAP and sent it to another third-party system via an OData service. That system purchases the toll voucher based on the legally required information sent by SAP.

3. Candidate Activity

Candidate | Company | Received | Replied | Interviewing | Completed | Result
Lesli Areli Tirado Ramirez | Moons | Yes | No | No | No |
David Alberto Vique Almeida | Talently | Yes | No | No | No |
| Moons | Yes | No | No | No |
| Moons | Yes | No | No | No |
Gilson Bormann | B4talent | Yes | No | No | No |
Vinicius Trindade | B4talent | Yes | No | No | No |
Ana Paula | B4talent | Yes | No | No | No |
Alef | B4talent | Yes | No | No | No |
Angie Xiomara Pérez Muñoz | Talently | Yes | No | No | No |
Matheus Costa | B4talent | Yes | No | No | No |
JHON ORTIZ GOMEZ | Talently | Yes | No | No | No |
Johan Mosquera | Moons | Yes | Yes | Yes | No |
Allan | B4talent | Yes | No | No | No |
Fabián Andrés Bocanegra Castillo | Talently | Yes | No | No | No |
Fabricio Bencomo | Talently | Yes | No | No | No |
Geovanni Mena | Talently | Yes | No | No | No |
David Alejandro López Martínez | Moons | Yes | Yes | No | Yes | strong_fit
Berenice Gómez | Moons | Yes | Yes | No | Yes | strong_fit
XIMENA SANCHEZ RIVERA | Moons | Yes | Yes | Yes | No |
Carla Sousa | B4talent | Yes | Yes | No | Yes | strong_fit
Víctor Terrazas | Moons | Yes | Yes | Yes | No |
Hans Steven Vergara Chamorro | Talently | Yes | No | No | No |
Ender Eduardo Mendez Brito | Talently | Yes | No | No | No |
Juan Eber Taype Escobar | Talently | Yes | No | No | No |
Abraham Berrocal | Moons | Yes | No | No | No |
Kevin Abel Brito Díaz | Talently | Yes | Yes | No | Yes | strong_fit
Joan Romero | Talently | Yes | No | No | No |
Marco Antonio Arroyo Romero | Moons | Yes | Yes | No | Yes | opted_out
Libardo Andres López Cala | Talently | Yes | No | No | No |
Matias Nicolas Arrua | Talently | Yes | No | No | No |
Matias Oviedo | Talently | Yes | No | No | No |
Miguel Ramos | Talently | Yes | No | No | No |
Emerson Kevin Martinez Ortiz | Moons | Yes | Yes | No | Yes | strong_fit
Robert José Landeo Portocarrero | Talently | Yes | No | No | No |
SARA CONCEPCIÓN ARMOA | Talently | Yes | No | No | No |
Eduardo | Moons | Yes | Yes | Yes | No |
Noe Perea | Moons | Yes | Yes | No | Yes | strong_fit
Luiza | B4talent | Yes | No | No | No |
Vinicius Freitas | B4talent | Yes | No | No | No |
Isra Rendon | Moons | Yes | Yes | No | Yes | strong_fit
Marco Antonio | B4talent | Yes | Yes | No | Yes | strong_fit
José Manuel Delgado Guirola | Moons | Yes | Yes | Yes | No |
Jesus Miguel Agamez Caraballo | Talently | Yes | Yes | Yes | No |
Luis Fernando Sanchez Agostini | Talently | Yes | Yes | Yes | No |
Andres Felipe Castro Cardona | Talently | Yes | No | No | No |
Simon ESCOBAR | Talently | Yes | No | No | No |
till dietrich | Talently | Yes | No | No | No |
Werner Alberto Rodríguez Rojas | Talently | Yes | No | No | No |
Zaghum Abbas | Talently | Yes | No | No | No |
MARCOS LEBRON | Talently | Yes | Yes | No | Yes | strong_fit
Stalyn Alexander Nogales Shagñay | Talently | Yes | Yes | Yes | No |
José Manuel Sánchez Hernández | Moons | Yes | Yes | Yes | No |
Samuel Jiménez | Moons | Yes | Yes | No | Yes | strong_fit
Miriam Jetzabel Espinoza Mejia | Moons | Yes | Yes | No | Yes | opted_out
Efrain Ayala | Moons | Yes | Yes | No | Yes | strong_fit
Luis Roberto Arroyo Bonifaz | Talently | Yes | No | No | No |
Andres Rocha | Talently | Yes | Yes | No | Yes |
Alex Montes | Talently | Yes | Yes | Yes | No |
Giuseppe | Moons | Yes | Yes | No | Yes | strong_fit
Sergio Matamoros | Talently | Yes | Yes | No | Yes | strong_fit
Cynthia Berenice Pelayo Moreno | Moons | Yes | Yes | No | Yes | strong_fit
Rene Arias | Talently | Yes | Yes | Yes | No |
Andres Felipe Diaz Gonzalez | Talently | Yes | Yes | Yes | No |
Sebastián Josué Montero Madrigal | Talently | Yes | Yes | No | Yes | strong_fit
José Torres | Talently | Yes | Yes | No | Yes | strong_fit
pablo guzman | Talently | Yes | Yes | No | Yes | strong_fit
Johan Marcel Fernández Castillo | Talently | Yes | Yes | No | Yes | strong_fit
Christian Sebastian Pinos Mendez | Talently | Yes | Yes | Yes | No |
Eyder Garcia | Talently | Yes | Yes | No | Yes | medium_fit
ANGEL VILLAVICENCIO | Talently | Yes | Yes | No | Yes | strong_fit
Ronaldo Sergio Jara Condori | Talently | Yes | Yes | No | Yes | strong_fit
Jean Carlos Vega Vidarte | Talently | Yes | Yes | No | Yes | strong_fit
Carlos Hernández | Talently | Yes | Yes | No | Yes | strong_fit