Session Summary:
Arvind Jain discusses enhancing engineering velocity and driving AI adoption across functions within the company. He emphasizes that CEOs must break inertia to push AI adoption internally. Topics include hiring AI engineers, the current capabilities of large language models (LLMs), AI's evolving landscape, and how this will impact founders.
Arvind Jain’s Background:
- Founder and CEO of Glean (ChatGPT for your organization). Established in 2019
- Glean integrates with enterprise systems (e.g., Google Drive, Salesforce) to streamline knowledge access
- Co-Founder of Rubrik and former Distinguished Engineer at Google
AI’s Impact on Velocity
Centralized knowledge bases combined with internal search/LLMs increase velocity
- While at Rubrik, Arvind observed that companies slow down as teams scale and information silos emerge
- His hypothesis was that by aggregating knowledge - you could use search and LLMs to help teams move faster
- This has proven true - Glean is used company-wide across EPD, GTM, and Support, as well as during onboarding
Primary value across teams:
Finding the right knowledge
- Because Glean aggregates knowledge across systems, employees use it to ask questions and get context quickly
- This has been useful during onboarding - employees can access all company knowledge by asking questions
- Instead of waiting for their manager or bugging their peers, this lets internal teams gather context quickly
Finding the right people
- One of the big problems Arvind faced at Rubrik was that it became hard to know who the internal experts were
- Employees then waste time asking the right questions to the wrong people - slowing the company down
- Glean looks at who the most prolific authors are on a subject and who is most active in the relevant tools
- With this information, Glean points employees to the right person, reducing time to the correct insight for teams
- Example: You have a question on AWS security. Glean points you to the most prolific internal author on AWS
- Without AI or an integrated system, this is nearly impossible to keep up to date, especially in a fast-growing startup
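The session does not describe Glean's ranking internals. Purely as an illustration of the idea - score candidate experts by how much they have authored on a topic, weighted by how recently they were active - a minimal sketch might look like the following. All field names and the decay parameter are hypothetical.

```python
from collections import Counter
from datetime import datetime, timezone

def rank_experts(documents, topic, recency_half_life_days=90):
    """Toy expert ranking: weight authorship count by recency of activity.

    `documents` is assumed to be an iterable of dicts with
    'author', 'topics', and 'updated_at' (timezone-aware datetime) fields.
    """
    now = datetime.now(timezone.utc)
    scores = Counter()
    for doc in documents:
        if topic not in doc["topics"]:
            continue
        age_days = (now - doc["updated_at"]).days
        # Exponential decay so recent activity counts more than old documents.
        recency_weight = 0.5 ** (age_days / recency_half_life_days)
        scores[doc["author"]] += recency_weight
    return scores.most_common()

# e.g. rank_experts(docs, "aws-security")[:3] -> top three candidate experts
```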
Arvind on Glean/Internal Search
“The product [Glean/Internal Search] is generally useful to every employee inside the company because all of us are humans who have questions. We're all knowledge workers [where] we tend to do some work and oftentimes it requires reading some information to do our work so, so it's a composite tool that gets company wide use”
Like Glean, Faire built tools internally to centralize knowledge and context and increase velocity
FaireyBot
- They leveraged GPT-4 and built connections to core systems (GitHub, Zendesk, Jira, etc.) to build a Slack bot
- The system is only as good as its inputs - Faire has placed a strong emphasis on clean documentation since day 1
- The Slack bot helps internal employees gather context fast and limits the amount of back-and-forth between teams
- They found Slack was the best place for Fairey as it's available to everyone and requires no behavior changes
- They also added their own filters on top based on rules/roles - For example, Fairey doesn’t disclose salaries
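Faire's filter implementation isn't shared; purely as an illustration, a rules/roles layer that keeps the bot away from sensitive topics such as compensation could look like the sketch below. The pattern list and fallback message are assumptions.

```python
import re

# Hypothetical deny-list of topics the bot should never answer on.
BLOCKED_PATTERNS = [
    re.compile(r"\b(salary|salaries|compensation|pay band)\b", re.IGNORECASE),
]

def apply_answer_filters(question: str, draft_answer: str) -> str:
    """Return the drafted answer unless the question or answer touches a blocked topic."""
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(question) or pattern.search(draft_answer):
            return "Sorry, I can't help with that topic - please reach out to the People team."
    return draft_answer
```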
Example: How Faire uses Fairey Bot to enable context gathering
Question
How is commission calculated on brand orders?
Answer
The commission on brand orders is computed based on several factors:
- If it is a trade show invoice, the commission is 0%.
- If this is for a brand invoice and the retailer is still a guest, the commission is 0%.
- If the retailer invited the brand before it was activated, the commission is 0%.
- If the brand and retailer have an existing relationship, the commission is reducedCommissionBps (RCB)
- If the order is in a state where the brand gave only limited commission, the commission is RCB
- If it's the first order for the brand with first order commission, the commission is firstOrderCommissionBps.
- All other cases get the commissionBps, which is basically the default commission.
The actual computation of the commission is done using a few variables and formulas, such as payoutTotalCents, which is computed based on the total cost of items, shipping, and the commission and fee percentages; grossOrderItemsTotal, which is the total cost of items before discounts and fees are applied; and brandFundedDiscounts, which is the total amount of discounts funded by the brand. There are also several helper functions and classes involved in the calculation, such as PayoutCostSummaryCents, Commission, and MoneyChanger.
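To make the bot's answer concrete, here is a minimal sketch of the branching it describes. This is not Faire's actual code; the flags and field names follow the variables mentioned in the answer but are hypothetical.

```python
def commission_bps_for_order(order) -> int:
    """Pick the commission (in basis points) following the rules in Fairey's answer.

    `order` is assumed to expose the flags and values mentioned above.
    """
    if order.is_trade_show_invoice:
        return 0
    if order.is_brand_invoice and order.retailer_is_guest:
        return 0
    if order.retailer_invited_brand_before_activation:
        return 0
    if order.has_existing_relationship:
        return order.reduced_commission_bps       # "RCB" in the answer above
    if order.brand_gave_limited_commission:
        return order.reduced_commission_bps
    if order.is_first_order_with_first_order_commission:
        return order.first_order_commission_bps
    return order.commission_bps                   # the default commission
```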
Code Reviews
- Very early, Faire invested in a “style guide”, documentation, and rules on how to write code and use features
- Based on this documentation, they built a tool to do AI-automated code reviews to increase velocity and shipping
- With this, AI now completes 80% of code reviews in 2 minutes; the reviewer only checks code outside the style guide
- The same AI that does the code review also summarizes the changes (useful for both submitter and reviewer)
- This reduces bottlenecks and limits the amount of time engineers have to wait for manual code reviews
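Faire hasn't published the internals of this tool; the sketch below only illustrates the general shape of such a reviewer - feed the style guide plus a diff to an LLM and get back violations and a summary. The model choice, prompt, and function name are assumptions.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def review_diff(style_guide: str, diff: str) -> str:
    """Ask an LLM to check a diff against the style guide and summarize the change."""
    prompt = (
        "You are a code reviewer. Using ONLY the style guide below, list any "
        "violations in the diff, then summarize the change in two sentences.\n\n"
        f"STYLE GUIDE:\n{style_guide}\n\nDIFF:\n{diff}"
    )
    response = client.chat.completions.create(
        model="gpt-4",  # hypothetical model choice
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return response.choices[0].message.content

# A human reviewer then only needs to look at code the style guide does not cover.
```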
Zapier is seeing widespread adoption of AI to power internal workflows
- Nearly half of Zapier employees now use OpenAI in Zaps to power internal workflows; an internal hackathon helped drive adoption
Example: Leveraging LLMs to increase accuracy of Zapier’s Natural Language API (NLA)
- Zendesk → Zapier Tables → OpenAI summarizer → Slack, built by Reid
- NLA uses LLMs under the hood and had poor accuracy at launch
- Used OpenAI + Zaps for issue summarization (see above) to direct the CTO's attention
- Impact: NLA success rate improved from 50% to nearly 80% in 3 months
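Zapier built this as a no-code Zap; for readers who want the same pattern in code, a rough equivalent - summarize a batch of tickets with OpenAI and post the digest to Slack - is sketched below. The channel name, model, prompt, and token placeholder are all assumptions.

```python
from openai import OpenAI
from slack_sdk import WebClient

openai_client = OpenAI()               # assumes OPENAI_API_KEY is set
slack = WebClient(token="xoxb-...")    # placeholder Slack bot token

def post_issue_digest(tickets: list[str], channel: str = "#nla-quality") -> None:
    """Summarize raw ticket text with an LLM and post the digest to a Slack channel."""
    response = openai_client.chat.completions.create(
        model="gpt-4",  # hypothetical model choice
        messages=[{
            "role": "user",
            "content": "Summarize the recurring issues in these support tickets "
                       "as five bullet points:\n\n" + "\n\n".join(tickets),
        }],
    )
    slack.chat_postMessage(channel=channel, text=response.choices[0].message.content)
```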
Key considerations for organizing data and knowledge in one place
Structure data and integrate systems to use AI more effectively
- Internal search and LLMs require structured data and system integration for optimal AI utilization.
- Glean solves this challenge by integrating with every internal system of record in a permissioned environment.
- Faire solves this challenge by using GPT-4 + connections to core systems (GitHub, Zendesk, Jira, etc.)
Importance of clean documentation
- The cleaner your underlying documentation, the more useful search will be.
- Glean and Faire invested from very early on in clean documentation across the business (not just in engineering).
- Early investment in comprehensive documentation across all functions supports successful AI tool deployment
Establishing communication norms
- Glean maintains a writing culture and encourages teams to avoid verbal communication
- There are established norms for meeting notes, meeting schedules, and Slack communication
- As an example, they ask that employees use public channels on Slack for questions
- These public facing questions/docs are fed into Glean and become context for future employees
- Initially, strict adherence to document structure was enforced, but it proved too complicated and unsustainable
- AI is particularly helpful here, as it can decipher data across differently structured internal documents
- Without AI, they wouldn’t be able to do this - they’d have to be much stricter on doc guidelines
- Additionally, Glean records video meetings and transcripts are fed into their knowledge base
- Like with Faire, this acts as semi-structured data that teams can use to ask questions and work faster
Create rituals/habits that consider usability by AI
- David, CEO of Glide (avra batch 2 alum), also noted they are enforcing new patterns of writing code that lend themselves to being completed by LLMs, driving potential long-term engineering velocity gains.
David from Glide on writing code that is more completable by LLMs
“LLM completion has created an increased design pressure on our internal APIs because you want to structure your code in a more declarative localized way where features can be outputted in a single file versus having to edit 3-4 different files…that’s generally considered good style of writing code…we are adopting patterns that are more completable by LLMs”
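Glide's actual conventions aren't shown; as a purely hypothetical illustration of the "declarative, localized" style David describes, the sketch below keeps everything a feature needs (metadata, handler, registration) in one file, so an LLM can complete it from local context instead of edits spread across three or four files.

```python
from dataclasses import dataclass
from typing import Callable, Dict

# A tiny, hypothetical feature registry: each feature is declared in a single file.
FEATURES: Dict[str, "Feature"] = {}

@dataclass
class Feature:
    name: str
    description: str
    handler: Callable[[dict], dict]

def register(feature: Feature) -> Feature:
    FEATURES[feature.name] = feature
    return feature

# The entire "export to CSV" feature, declared locally - nothing to edit elsewhere.
register(Feature(
    name="export_csv",
    description="Export the current table view as a CSV file",
    handler=lambda request: {"url": f"/exports/{request['table_id']}.csv"},
))
```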
AI’s Impact on Engineering
Context Gathering and Troubleshooting
- Engineers utilize AI to gather project context, locate design documents, and understand code bases.
- Internal search tools save engineers 5-6 hours weekly by reducing time spent on context retrieval.
- For instance, Glean analyzes task histories, comments, and pull requests to expedite project initiation and troubleshooting.
- The Fairey bot reduces the time it takes to gather context. Engineers use the Slack bot to ask questions about the code base and the definitions of metrics, which is faster than waiting for a more tenured engineer to respond
Code Assistant / Copilot
- Engineers at Glean employ Copilot primarily for autocomplete functionalities rather than full code generation.
- Full reliance on AI for coding hasn’t significantly improved velocity.
- Faire rolled out Copilot to the entire engineering and productivity team; they saw some gains, but not as large as GitHub claims
- Copilot saves time on boilerplate code and new work, and is great for long/complex output parsing
- Copilot is not as useful for SQL/database work or for working in existing code
- At Faire, PRs per active contributor went up 20%-30%; some of this is driven by Copilot, some by the team becoming more tenured
Unit Tests
- AI automates unit testing, a time-consuming process comparable to actual code writing.
- Previously, testing at Rubrik could take a day following a 5-minute coding task due to rigorous acceptance criteria.
- Glean uses GPT-4 directly; engineers input code into ChatGPT to generate unit tests, streamlining deployment testing despite some operational clunkiness.
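The session describes this as engineers pasting code into ChatGPT; a scripted version of the same idea might look like the sketch below. The prompt, model name, and usage line are assumptions, and generated tests still need human review before they land in CI.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def generate_unit_tests(source_code: str, framework: str = "pytest") -> str:
    """Ask GPT-4 to draft unit tests for the given source code."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{
            "role": "user",
            "content": f"Write {framework} unit tests covering the edge cases of this "
                       f"code. Return only the test file contents.\n\n{source_code}",
        }],
        temperature=0,
    )
    return response.choices[0].message.content

# Hypothetical usage: draft tests for a module, then review them before committing.
# tests = generate_unit_tests(open("payouts.py").read())
```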
Code Reviews
- 75%-80% of code is reviewed in 2 minutes or less. Reviewers only have to review code outside of the style guide
- Reduces bottlenecks as engineers do not have to spend time waiting for a reviewer to approve their code
Cultural Impact
- Outside of AI, one of Glean’s cultural values is serving others and then yourself.
- This trickles into the code review process. If an engineer has to decide between writing their own code or reviewing someone else's, they should complete their counterpart's first. This saves time by freeing up another engineer as opposed to making them wait.
AI’s Impact on Support
- Arvind sees support as one of the biggest use cases for AI. The challenge is that data is often siloed across different products. Information can be held in knowledge articles, in Slack, in JIRA, or in other channels. Most tools in the support space don’t connect everything, so they are not that useful. Glean solves this by aggregating support information into a single place
- Today, AI is most useful for high-volume, low-complexity tickets. There is typically a knowledge base associated with the issue, and these can be heavily automated. T-Mobile, a Glean customer, saw a 45% improvement in their case resolution times. Support agents use Glean as a sidekick to review relevant documents and answers while they are talking with customers
Arvind on why most AI support tools don’t work
“The standard issue a lot of people have run into is [where is the resolution located]? Sometimes there's a new thing, but maybe like a few other customers have run into it. So there may be a similar case, you know, from another customer that may have some hints. Sometimes it's not resolved yet, but your teams are now discussing this issue in Slack. So like when a new support agent also sees an issue, they could say they don’t need to devote the time and energy right now as it's already been discussed in Slack. Sometimes it could actually be a known engineering issue in JIRA. There might be a JIRA ticket on it. So basically, the point I'm making is that knowledge is sort of everywhere. Zendesk or Intercom are sort of restricted in that sense, because they're not connected to everything.”
Matt Botvinick on why he agrees but with a caveat
“Agree – but I am not sure whether special-purpose tools will be necessary in the long term or whether general purpose AI systems will be able to handle the job with appropriate prompting and tool-use. We are not there yet, though”
Zapier has improved CSM productivity and support planning with AI
Example: Re-allocate CSM to identify upsells in support inbox
- Zendesk → OpenAI triage/categorization → Slack
- Customer success managers spend 30 min/rep/week reviewing support tickets for their accounts to identify upsell opportunities; AI now handles this routing passively
- Impact: +1% incremental ARR/mo (+$3k ARR/mo)
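In practice this is a Zap; a code sketch of the triage step might classify each ticket for upsell intent and route hits to a CSM Slack channel, as below. The label set, channel, prompt, and ticket fields are assumptions.

```python
from openai import OpenAI
from slack_sdk import WebClient

client = OpenAI()                      # assumes OPENAI_API_KEY is set
slack = WebClient(token="xoxb-...")    # placeholder Slack bot token

def route_upsell_candidates(tickets: list[dict], channel: str = "#csm-upsells") -> None:
    """Flag tickets that look like expansion opportunities and post them to Slack."""
    for ticket in tickets:
        label = client.chat.completions.create(
            model="gpt-4",
            messages=[{
                "role": "user",
                "content": "Answer UPSELL or OTHER only. Does this support ticket suggest "
                           "the customer wants more seats, plans, or usage?\n\n" + ticket["body"],
            }],
            temperature=0,
        ).choices[0].message.content.strip()
        if label == "UPSELL":
            slack.chat_postMessage(channel=channel, text=f"Possible upsell: {ticket['url']}")
```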
Example: Positive impact on support workforce planning
- Zendesk → OpenAI ticket summarizer (internal tool) → Add Zendesk note
- Support reps use OpenAI to summarize, answer questions about, and draft replies to ~10% of all weekly tickets, decreasing handle time and increasing response quality
- Impact: AI tool use is expected to become part of rep performance management
AI’s Impact on Go-To-Market
- Arvind noted that AI has impacted Glean mainly in customer-facing capacities, where a salesperson uses the product to build context quickly. Glean has yet to automate outreach or prospecting.
Prospecting
- There are tools to automate (e.g., Outreach, Clay), but AI is not impacting this job at Glean yet
Meeting Preparation
- Glean (or any centralized search platform) is incredibly useful when preparing for meetings. AEs can pull up relevant data on companies such as the latest status, action items, and key areas of customer focus. This helps AEs be more effective on the job
Closing the Deal
- Salespeople, like support agents, use Glean to answer customer questions in real time during the pitch
- Post-meeting, Glean prepares follow up emails and captures meeting takeaways in summarized documents
Arvind on how AI has impacted GTM
“The seller has 3 key activities. There is prospecting and actually finding people to sell to. You are going to have a meeting, preparing for it, and running it. Then having a follow up meeting. We are not doing a whole lot of prospecting with Glean. Once you actually have a meeting scheduled, Glean is actually very helpful, it helps you prepare for the meeting. For any new meeting that is coming up, you can open up a document and Glean will tell you about the customer, what's the latest status, what are the action items, what are the key areas of focus. It will give you a 360 view of the customer. During the meeting, the customer asks you all sorts of questions. The salespeople will demo the tool and put those questions into Glean for the customers, and kill two birds with one stone. It shows that Glean will answer all of those technical questions. Glean can also prepare the follow up.”
Zapier has increased high-touch conversion and SDR efficiency using AI
Example: Zapier increased high touch conversion by 5% and capacity of reps/month by 10 opportunities
- Gong → OpenAI feature extraction → HubSpot, built by Dyan
- AI extracts information from Gong calls, summarizes information/next steps, and updates HubSpot with record details
- Enables sales exec reps to focus more on higher-quality, higher-ACV deals in HubSpot with zero rep/admin overhead
- Impact: 4-5% increase in conversion rate (+$50k ARR/mo)
- Impact: Reps handle additional 10 opportunities/rep/month (+$40k ARR/mo)
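The Zap's internals aren't shared; a hedged sketch of the extraction step is below - pull structured fields out of a call transcript as JSON that could then be written to the CRM record. The field names and prompt are assumptions, and the HubSpot update is left as a hypothetical stub.

```python
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

def extract_call_fields(transcript: str) -> dict:
    """Extract deal-relevant fields from a sales call transcript as a JSON object."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{
            "role": "user",
            "content": "Return only a JSON object with keys summary, next_steps, "
                       "budget_mentioned, and decision_maker for this sales call "
                       "transcript:\n\n" + transcript,
        }],
        temperature=0,
    )
    # Assumes the model returns valid JSON; validate before writing to the CRM.
    return json.loads(response.choices[0].message.content)

# fields = extract_call_fields(gong_transcript)
# update_hubspot_deal(deal_id, fields)   # hypothetical CRM update step
```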
Example: Zapier increased meeting book rate by 40 basis points
- HubSpot → OpenAI personalized use case gen → Human review/send (zap), built by Beth
- Automation is hard to understand; personalized use case recommendations based on a prospect's apps convert better. Example outbound email.
- Impact: "best sales email I've ever received". Meeting book rate on outbound emails increased from 0.8% to 1.2%.
Role of the Founders: Breaking Inertia and driving adoption of AI internally
- While the promise of AI is clear, Arvind has to remind the team at Glean to actively seek out AI tools. Despite being a deeply AI-focused company, employees don’t proactively approach their IT team to request new tools.
- This was surprising to Arvind because customers report the exact opposite experience. He started to build reminders into public forums and his leadership meetings to solve this issue. At the end of the day, you need to break habits, and this takes repetition
Arvind on adoption of AI at Glean
“Nobody is confused about the promise of AI but I have to tell everybody in our company to go and look for tools out there and embrace AI technology. They are not coming to us with an AI tool and saying I want to buy this AI tool. We have also seen it the other way around, where our customers tell us they are getting bombarded by hundreds of AI tools and their team members are requesting it a lot”
Tactic: Talk about using AI tools publicly to emphasize its importance at the company
- At all hands, Arvind encourages everyone to identify one or two manual processes that can be optimized with AI
- He asked teams to focus on use cases versus tools to buy - the leadership team will help with selection/buying
- Framing the question as “what is repetitive and that you wish someone else could do?” helps drive good results
- This helped surface a few specific AI tools they are going to try internally to improve the velocity of people’s work
Leaders: Screen for candidates excited about AI who are already implementing AI in their function
- AI will impact every single function over time - you need to hire leaders who understand this and are on top of it
- In his experience, most leaders use the tools they know because that is what they have grown up with over time
- Arvind screens for leaders who are already implementing AI in their functions and are excited about AI’s potential
- In every interview, he asks 1) how have they implemented AI into their work and 2) the tools they favor. With just these two insights, you can easily pick up on whether they embrace new technologies and will push the boundary as AI advances
Arvind on non-technical leaders
“For the support leader, for leaders in general, we want them to have a mindset that I'm going to build the most modern function. I’m evaluating them and asking them to tell me how AI is going to change how you work? What are the tools that you actually like? It becomes clear who is trying to be proactive and trying to learn versus not. This is actually really good and it should become part of your interview criteria.”
- Marcelo (co-founder) wanted Faire to become early adopters of AI, but didn’t want unregulated exploration. He also knew that if they pushed AI top down, they wouldn’t see internal excitement and organic adoption. To solve this, Faire:
1. Established an AI vision to help employees understand the role of AI
- There was a lot of anxiety initially that AI would be used to replace people at Faire
- Marcelo and team published their AI vision to assuage these concerns and establish the purpose of AI
- The purpose of AI at Faire: Enabler of people, not replacer. Ideally, AI helps the company get twice as much done
2. Invested in an AI foundations team to create tooling for internal teams to adopt AI
- Faire set up a small team of 6 of their best people. The focus of this team is technical enablement
- The AI foundations team objective is to make technology available and to centralize AI knowledge at Faire
- This team builds “bridges” to the external world (e.g., connections to APIs, LLMs, and other 3rd party tools)
- If you want to do something with AI, people know there is a central team with knowledge and tooling support
- Eventually, they want to have efficient tooling to enable teams to fine-tune and serve LLMs/models at Faire
AI Foundations Pod's Mission
“Build expertise & enable Faire to effectively explore AI opportunities, empower product teams to rapidly and safely deliver robust, low-latency features leveraging AI”
3. Established AI Captains within teams to come up with ideas to potentially include in the roadmap
- Within each functional team, they established AI Captains. At the company, there are 20-30 people in this role
- AI Captains are functional leaders who have interest in AI and are typically direct reports of department heads
- The AI Captain’s job is to come up with ideas to improve productivity at Faire leveraging AI in the short-term
- The foundation pod/leadership team review these ideas and decide what to include in the roadmap or not
4. Drive bottoms up interest via AI Hackathon(s) and AI Tech Talks
- Initially, Faire held AI tech talks with researchers and experts to build interest and understanding at the company
- After the AI foundations team built basic foundations/tooling, Faire planned a 3 day Hackathon across 27 teams
- Going in, they told the teams the goal of the Hackathon was to 1) learn and explore, 2) see what AI can do, and 3) see what AI cannot do. The learnings from the Hackathon would become key inputs in planning. They realized:
- 1. Huge opportunity in search & discovery
- 2. Tremendous potential in internal productivity in engineering and data organizations
- 3. Low hanging fruit in customer support, but difficult to solve well
- 4. AI content generation can create better product listings and campaigns for the marketplace
- The Hackathon helped drive interest at Faire - and also encouraged interaction with the AI foundations team
Launching AI native products or features
Below is Arvind’s iterative framework for launching AI products. He argues that companies should focus on quality and accuracy first, followed by speed, and then cost. This optimizes for driving customer product-market fit before making customer-facing trade-offs.
Step 1: Prior to product market fit, focus on quality and correct answers
- Companies should ignore costs for the first year and make sure the product works and the AI is performing well
- Since startups naturally don’t have much usage, cost should not be very significant early on
- Do not trade quality for latency - the system’s answers have to be correct, even if they take a bit longer early on
Step 2: Once the product works and product market fit is achieved, focus on making it fast
- Once you are happy with quality and usage, focus on making your experience fast
- Speed drives user experience and has a second-order benefit of making your product cheaper to operate
- Speed leads to lower cost because faster performance requires smaller models, which have lower inference costs
- In other words, making it fast = smaller models = cheaper to operate
Step 3: Optimize for cost
- Optimize for cost after achieving speed and quality, by refining your approach and model composition.
Arvind on launching AI products
“Our mental model was to ignore cost for the first year. First make the product work before you think about cost. At Glean we told people not to make tradeoffs on costs or latency to start with. Once you have a good product, then you can go in that order and make it fast and that will actually make it cheaper. This is because making it fast requires you to work with smaller models which have a lower inference cost”
Hiring engineers to build AI products
Engineering ICs: Target backend engineers with an analytical mindset and willingness to learn
- Chasing LLM or AI engineers isn't very useful for young companies, even for LLM-centric products. These specialists are expensive and in high demand, leading to salary premiums (sometimes >5x) and creating a two-tier compensation model within your company. Philosophically, Arvind believes AI engineers shouldn’t be paid multiples of what a strong systems engineer earns
- There are very few true AI experts today, and the technology is rapidly evolving. Backend engineers with an analytical mindset, a willingness to keep learning, and an interest in AI are more valuable hires than LLM or foundation-model engineers - this is what Glean optimizes for despite being an LLM company.
- There is a strong open source / open domain community with tools, techniques, and documentation that can help engineers to learn best practices in AI. Given the rate of change in the technology, willingness to learn is one of the most important traits to look for when hiring for AI teams
- While less relevant for Glean, experts at Google DeepMind feel deep learning is not like traditional coding - through experience, one gets a ‘feel’ for how deep learning systems behave, which is quite important to get them to work well. This still does not need high-level technical LLM expertise, but it does require many hours logged with LLM work
Arvind on hiring LLM engineers
“From my perspective nobody is an expert, this technology is changing so fast that you have to constantly learn. You have to figure out new things. You don’t need someone who has played with LLMs before but someone who is a strong backend engineer with an analytical mindset and someone who is willing to read and learn. That’s what I would do, forget about the AI engineers and hire backend engineers.”
Technical Leaders: Target candidates that have worked on areas with similarities to AI like search
- When hiring leaders, Glean prioritizes those who have worked on technologies like search, search ranking, and ML systems rather than focusing solely on LLM experience. These areas require a different mindset than traditional engineering and closely align with the work you do in AI. For example, search ranking requires working deeply with data to extract patterns, just as AI does.
Open Source vs. Proprietary Models
Arvind estimates that 90%+ of workloads today run on closed models like GPT-4 or Claude Opus. In the future, his view is that 80% of workloads will run on open source / open domain models. Two key reasons:
- Significant momentum in open source models from companies like Meta and Mistral. The big 5 (OpenAI, AWS, Anthropic, GCP, and Azure) will create general models that eventually converge in performance. At this point, open models like Llama 3 will be faster and cheaper to use. Having said that, the regulatory environment is likely to have a massive impact on the growth/viability of open source. California’s SB 1047 (if it goes forward) could severely limit this area
- The majority of use cases (80%+) will be domain specific and require fine tuning where open models win
Arvind on Open Source
“As you see Llama 3, you can see it's good that companies like Meta are putting models in the open domain. I think we will have really good models in the open domain and most problems can be solved using them. They will be cheaper, faster, and can be fine-tuned for your specific use cases. Most AI applications will be very specific and will have one thing they do well. That bodes well for open domain models.”
However, other experts would say this shift is not as clear cut and will come down to use case and cost long-term. General reasoning tasks lend themselves to closed foundational models today. On the other hand, specific use cases requiring models to be good in one area lend themselves to open models. Some experts even suggest that using prompt engineering on closed models with long enough context windows could outperform fine-tuned open models in the future. This approach is expensive right now, but could become more cost effective as models continue to evolve, presenting a risk to some open-source use cases.