the composable layer
when we started wiring smart surfaces to GraphQL at Upwork, i thought it'd be pretty mechanical. you have a component, it needs data, you write a query. done. but once you have 20+ surfaces each talking to a domain gateway that stitches together identity, catalog, messaging, and half a dozen other schemas — patterns start to emerge whether you plan for them or not.
the first thing we landed on was a composable-per-query convention. every GraphQL query gets its own composable that handles loading state, error normalization, and computed accessors for the bits of data you actually care about:
```ts
// composables/useJobDetails.ts
import { computed } from "vue"
import { useQuery } from "@vue/apollo-composable"
import { JOB_DETAILS_QUERY } from "./queries"

export function useJobDetails(jobId: string) {
  // @vue/apollo-composable exposes the response as `result`
  const { result: data, loading, error } = useQuery(JOB_DETAILS_QUERY, {
    variables: { id: jobId },
  })

  const job = computed(() => data.value?.job)
  const proposals = computed(() => data.value?.job?.proposals ?? [])

  return { job, proposals, loading, error }
}
```

this looks simple, but the key insight is that the component never touches the raw GraphQL response. it gets `job` and `proposals` as computed refs. if the schema changes — say proposals move to a separate resolver — you update the composable and nothing downstream breaks. we learned this the hard way after a schema migration rippled through 14 components in a single surface.
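the same insulation idea can be shown without any Vue machinery. here's a plain-TypeScript sketch where the response shape and selector names are hypothetical stand-ins for what JOB_DETAILS_QUERY actually returns:

```ts
// Hypothetical response shape for JOB_DETAILS_QUERY (illustrative only)
interface JobDetailsData {
  job?: {
    title: string
    proposals?: { id: string }[]
  } | null
}

// Selectors own the response shape; components only ever call these.
// If proposals move to a separate resolver, only selectProposals changes.
export const selectJob = (data?: JobDetailsData) => data?.job ?? null
export const selectProposals = (data?: JobDetailsData) =>
  data?.job?.proposals ?? []
```

the computed refs in the composable are doing exactly this, just wrapped in reactivity.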
the domain gateway
the architecture at Upwork is that each business domain owns its own GraphQL schema. identity handles users, auth, profiles. catalog handles jobs, skills, categories. messaging handles threads, notifications. there's a gateway that stitches these together so from the client's perspective it's one schema.
this is great until you need data that spans domains. a job detail page needs the job (catalog), the client who posted it (identity), and the proposal thread (messaging). one query, three domains:
```graphql
query JobDetails($id: ID!) {
  job(id: $id) {
    title
    budget
    client {
      name
      rating
    }
    proposalThread {
      messageCount
      lastActivity
    }
  }
}
```

the gateway resolves each field to the right domain service. from the Vue side we don't care about any of that — we just see one response. but error handling gets interesting, because a failure in the messaging service shouldn't prevent you from seeing the job details.
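per the GraphQL spec, each entry in the response's errors array carries a path pointing at the field that failed. a small helper (the `fieldFailed` name is hypothetical, not from our codebase) can check whether a given field is affected, so the UI can still render job details when only the messaging resolver errored:

```ts
// Shape of one entry in a GraphQL response's errors array (per the spec)
interface GraphQLErrorLike {
  message: string
  path?: (string | number)[]
}

// Returns true if any error in the response touches the given field,
// e.g. fieldFailed(errors, "proposalThread") after a messaging outage.
export function fieldFailed(
  errors: GraphQLErrorLike[] | undefined,
  field: string,
): boolean {
  return (errors ?? []).some((e) => e.path?.includes(field) ?? false)
}
```

the component can then render the job normally and swap just the thread panel for a fallback.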
error handling that actually helps
this was probably the most annoying part to get right. GraphQL errors are not like REST errors. a 200 response can contain partial data AND errors simultaneously. we ended up with three error categories and a utility that classifies them:
```ts
// utils/classifyError.ts
import { ApolloError } from "@apollo/client/core"

type ErrorKind = "network" | "graphql" | "auth"

export function classifyError(error: unknown): ErrorKind {
  if (error instanceof ApolloError) {
    if (error.networkError) return "network"
    const codes = error.graphQLErrors.map((e) => e.extensions?.code)
    if (codes.includes("UNAUTHENTICATED")) return "auth"
    return "graphql"
  }
  // anything that isn't an ApolloError gets treated as a transport failure
  return "network"
}
```

network errors mean something's down — show a retry button. auth errors mean the session expired — redirect to login. graphql errors are usually bugs in our queries or unexpected null fields — log them and show a graceful fallback. each surface has an error boundary component that switches on this classification.
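the boundary's switch is easy to keep pure and testable. a minimal sketch, with hypothetical names for the action shape:

```ts
type ErrorKind = "network" | "graphql" | "auth"

// Hypothetical shape of what the boundary decides to do
interface ErrorAction {
  fallback: "retry" | "login" | "degraded"
  report: boolean
}

// Pure dispatch: the error boundary component switches on the classification
export function actionFor(kind: ErrorKind): ErrorAction {
  switch (kind) {
    case "network":
      return { fallback: "retry", report: false } // something's down: offer retry
    case "auth":
      return { fallback: "login", report: false } // session expired: redirect
    case "graphql":
      return { fallback: "degraded", report: true } // likely our bug: log it
  }
}
```

keeping this as a pure function means the mapping gets unit-tested without mounting anything.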
pinia as the shared layer
the other pattern that emerged was using Pinia stores when multiple components in a surface need the same data. without this you get the mount storm problem — three components mount, three queries fire, the gateway does triple the work.
```ts
// stores/jobStore.ts
import { ref } from "vue"
import { defineStore } from "pinia"
import type { Job } from "../types" // wherever the Job type lives

export const useJobStore = defineStore("job", () => {
  const jobs = ref<Map<string, Job>>(new Map())
  const pending = ref<Set<string>>(new Set())

  function hasJob(id: string) {
    return jobs.value.has(id)
  }

  function setJob(id: string, job: Job) {
    jobs.value.set(id, job)
    pending.value.delete(id)
  }

  return { jobs, pending, hasJob, setJob }
})
```

the composable checks the store before firing a query. if the data's already there, skip the network call. if another component already has a request in flight (tracked via `pending`), wait for that instead of duplicating it. this cut our gateway traffic by something like 40% on the heavier surfaces.
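the in-flight dedup can be sketched without Pinia at all. a minimal version, assuming a hypothetical `fetchJobOnce` helper that the composable would call instead of firing its own request:

```ts
interface Job {
  id: string
  title: string
}

// First caller for an id fires the fetch; later callers for the same id
// share the same in-flight promise instead of duplicating the request.
const inflight = new Map<string, Promise<Job>>()

export function fetchJobOnce(
  id: string,
  fetcher: (id: string) => Promise<Job>,
): Promise<Job> {
  const existing = inflight.get(id)
  if (existing) return existing
  const request = fetcher(id).finally(() => inflight.delete(id))
  inflight.set(id, request)
  return request
}
```

the `finally` cleanup matters: once the request settles, the entry is dropped so a later mount can refetch instead of getting a stale promise forever.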
what i'd do differently
honestly the composable-per-query pattern has held up well. the thing i'd change is that we started with too-granular stores. we had a store per entity type — jobs, users, proposals, contracts — and they'd get tangled up with cross-references. now i'd scope stores to surfaces instead. each surface gets one store that holds whatever that surface needs. simpler mental model, easier to reason about cache invalidation, and when a surface unmounts you can just blow away its store without worrying about orphaned refs.
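to make the surface-scoped idea concrete, here's a plain-TypeScript sketch (the real thing would be a Pinia store created per surface; `SurfaceCache` is a hypothetical stand-in):

```ts
// One cache per surface, holding whatever that surface needs,
// disposed wholesale when the surface unmounts.
export class SurfaceCache {
  private entries = new Map<string, unknown>()

  get<T>(key: string): T | undefined {
    return this.entries.get(key) as T | undefined
  }

  set(key: string, value: unknown): void {
    this.entries.set(key, value)
  }

  // called on surface unmount: no per-entity invalidation to reason about
  dispose(): void {
    this.entries.clear()
  }
}
```

the win is the lifetime model: cache lifetime equals surface lifetime, so there's exactly one invalidation event to think about.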
the domain gateway pattern is genuinely good though. it lets each team own their schema without frontend engineers needing to know which service owns what. you just write the query shape you want and the gateway figures it out. that's the dream, and most of the time it actually works.