How to build a CSV export system with Laravel

By Dan Harrin

Exporting data from a web application is a common requirement. It's often necessary to provide users with a way to download data from the application, so they can use it in other applications, or to provide a backup of their data. However, exporting data can be a slow and memory-intensive process, especially when dealing with large datasets. In this article, we will explore how to build an efficient export system in a Laravel application, which can handle large datasets without running into memory issues, and can be queued to run in the background.

The article is comprehensive. It will start by building a simple system, and make many small changes to refactor it, alongside the reasoning behind those changes. This is to help you understand the process of building an export system, and the problems you may encounter along the way. It is a window into the investigative process of building a feature, and not a quick tutorial. If you're looking for the final code or information about a pre-built export system, check out the conclusion.

The baseline application

The idea behind this export system example is to allow the user to export a list of "users" that belong to a "team". The table contains the columns name, email and role. name and email are columns on the User model, whereas the role is a pivot column that lives on the intermediate team_user table. This pivot table is used by the BelongsToMany relationship between the Team and User models.
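
For reference, the relationship might be defined something like this. This is a sketch rather than the article's actual code, but the withPivot('role') call is what exposes the role pivot column on each related user:

namespace App\Models;

use Illuminate\Database\Eloquent\Model;
use Illuminate\Database\Eloquent\Relations\BelongsToMany;

class Team extends Model
{
    // Each team has many users; the `role` column lives on the
    // intermediate team_user pivot table.
    public function users(): BelongsToMany
    {
        return $this->belongsToMany(User::class)->withPivot('role');
    }
}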

The app uses Livewire, since it makes it straightforward to hook up an interactive interface to the export system, but this guide applies to any Laravel application. We will also be using the Filament Notifications package, which is built for Livewire and provides an easy way to send flash (session) and database notifications to the user. You do not need to use either package; you can use Laravel's built-in notifications instead if you wish.

The UI from which the export is being triggered contains a table of users, with a search input to filter the users by name or email address. The table also contains a button to trigger the export. When the export is triggered, the user should receive a notification that the export is in progress, and then another notification when the export is complete, with a link to download the CSV file. When searching for users, the export should only contain the users that match the search query. There is a button in the interface to open a modal containing the database notifications for the user.

If you wish to follow along, this article expects you to have a good understanding of PHP, Laravel, and especially the Laravel queue system. Your app's environment should be set up with the following:

  • A private S3 bucket configured for storage.

  • A Redis-based queue, preferably also using Laravel Horizon for monitoring and debugging.

  • The job_batches table in the database (see the commands below for one way to create it).
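
If your app does not already have the job_batches table, it can be created with the batch table migration. On older Laravel versions the commands look like the following; newer versions ship an equivalent make:queue-batches-table command, and new apps may already include the migration by default:

php artisan queue:batches-table
php artisan migrate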

You can find the baseline UI for the application in the starting-point branch of the GitHub repository.

Synchronous exports

In PHP, the league/csv package is the standard choice for reading and writing CSV files. To install it:

composer require league/csv:^9.0

To start with, we will create a CSV file and stream it to the user immediately as a file download.

Since I am using Livewire, I will create a new export() method on the Livewire component, which will handle the export. However, if you're using a traditional POST request with a controller, this will work similarly.

We need to start by getting all the data we want to put in the CSV. Let's save a $header array containing the names of each column for the first row:

$header = ['name', 'email', 'role'];

Now, we will get the data from the database. As I mentioned previously, my Livewire component supports searching for users by their name or email address. We want the export to respect the search results shown in the table, so we can reuse the getQuery() method I have on my component. Since we're exporting users and their roles in a specific team, we want the BelongsToMany scoped relationship instance. (You can also get a standard Builder from getQuery() if you don't need any relationship data.)
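
For illustration, a minimal getQuery() method might look something like this, assuming the component has a $team property and a $search property (both assumptions for this sketch, not part of the original code):

// Hypothetical component method: returns the team's users relationship,
// filtered by the current search string if one has been entered.
public function getQuery(): BelongsToMany
{
    $query = $this->team->users();

    if (filled($this->search)) {
        $query->where(function ($query) {
            $query
                ->where('users.name', 'like', "%{$this->search}%")
                ->orWhere('users.email', 'like', "%{$this->search}%");
        });
    }

    return $query;
}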

We need to run the query to get the results, and then map each of those Eloquent records into an array, containing the values of the columns we want to export. We can use the map() method on the Eloquent collection to do this:

use App\Models\User;
 
$records = $this->getQuery()->get()->map(fn (User $user): array => [
    $user->name,
    $user->email,
    $user->pivot->role,
])->all();

Each array in the $records array will represent a row in the CSV file, and each value in the array will represent a cell in the CSV file. The first value in each array will be the first column in the CSV file, the second value will be the second column, and so on, so they correspond to the values in the $header array.

Now that we have the data we want to export, we can create a new CSV Writer instance using the createFromString() method, then use insertOne() to insert the header row and insertAll() to insert all the record rows:

use League\Csv\Writer;
 
$csv = Writer::createFromString();
$csv->insertOne($header);
$csv->insertAll($records);

Finally, we can stream the CSV to the user as a file download using the streamDownload() method on the response() helper:

return response()->streamDownload(
    fn () => print($csv->toString()),
    'users.csv',
    ['Content-Type' => 'text/csv'],
);

Since I am using Livewire and want to provide some immediate feedback to the user, I am also going to use a Filament flash notification to alert the user that their export is completed and is downloading, but this is optional:

use Filament\Notifications\Notification;
 
Notification::make()
    ->title('Export completed')
    ->body('Downloading...')
    ->info()
    ->send();

Queued exports

While this system works well for small datasets, it will not perform well when exporting large datasets. It would also be good to abstract the export process so that other datasets can be exported in other parts of the application without duplicating code.

The next iteration of the export system will be a queued job, which writes the CSV file into the filesystem (S3), and then sends a database notification to the user with a link to download the file. The S3 file will be private, so we will generate a temporary signed URL using S3 to allow the user to download the file for 24 hours, after which it will expire.

First, we will create a new job using the make:job Artisan command:

php artisan make:job ExportJob

This job will be able to handle any type of export, so we need a few things:

  • The model that we want to export

  • The header of the CSV file

  • Which database rows (records) to export

  • A function to transform the database rows into CSV rows

  • The current user, so we can send them a notification

We will pass all of this information to the job as constructor arguments, using property promotion:

use App\Models\User;
use Closure;
 
public function __construct(
    protected string $model,
    protected array $header,
    protected array $records,
    protected Closure $mapper,
    protected User $user,
) {}

Now, we can dispatch the job from the Livewire component, passing the information we need to the job:

use App\Jobs\ExportJob;
use App\Models\User;
 
dispatch(new ExportJob(
    model: User::class,
    header: ['name', 'email', 'role'],
    records: $this->getQuery()->pluck('users.id')->all(),
    mapper: fn (User $user): array => [
        $user->name,
        $user->email,
        $user->pivot?->role,
    ],
    user: auth()->user(),
));

The main new part of this code is the records array, which contains the IDs of the records we want to export. We are using the pluck() method to get an array of user IDs from the query, which we will pass to the job. The job will then use these IDs to retrieve the records from the database. This is more efficient than retrieving all the model instances in a collection and passing them to the job, as it reduces the amount of data in memory.

However, when we dispatch the job, we get an error, since Closure objects are not serializable. Closure objects represent functions in PHP, in this case the mapper function. Serialization is the process of converting data (usually objects) into strings, so they can be sent to Redis for temporary storage before the queued job gets processed. We can't serialize a Closure object, so we need to find a way to pass the function to the job in a different way.

Luckily, Laravel has a solution—their Serializable Closure package is able to wrap a function and make it serializable. We can install it using Composer:

composer require laravel/serializable-closure

The package makes it simple to serialize the mapper function: wrap it in a SerializableClosure instance in the constructor, and store that in a property instead of the original function:

use Closure;
use Laravel\SerializableClosure\SerializableClosure;
 
protected SerializableClosure $mapper;
 
public function __construct(
    protected string $model,
    protected array $header,
    protected array $records,
    Closure $mapper,
    protected User $user,
) {
    $this->mapper = new SerializableClosure($mapper);
}

Now, in the handle() method of the job, we can move the CSV generation code from the Livewire component to the job:

use League\Csv\Writer;
 
$csv = Writer::createFromString();
$csv->insertOne($this->header);
$csv->insertAll(
    $this->model::find($this->records)
        ->map($this->mapper->getClosure())
        ->all(),
);

We pass the CSV data to the insertAll() function. Let's explore that process in more detail:

  • $this->model contains the fully qualified class name of the model we want to export

  • Since find() is a static method available on model classes, we can call it directly on the model class name. If it were an instance method, we'd have to instantiate the model first. Developers usually pass a single ID to find() to fetch one model instance, but you can also pass an array of IDs, which returns a collection of all the corresponding model instances (see the short example after this list).

  • From the collection of model instances, we can then map them to the CSV rows using the mapper function. We use the getClosure() method on the mapper property to get the original function back from the SerializableClosure instance, and then call it using the map() method on the collection.

  • Finally, we call the all() method on the collection to get an array of the CSV rows, which we pass to the insertAll() method of the CSV writer.
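
To make that last point concrete, here is a small illustration (the IDs are arbitrary):

use App\Models\User;

// A single key returns a single model instance (or null)...
$user = User::find(1);

// ...whereas an array of keys returns an Eloquent Collection containing
// every model whose primary key is in the array.
$users = User::find([1, 2, 3]);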

Now, let's store the CSV file privately in an S3 bucket. We need to generate a unique file name for this export, so we should use a random string as part of the file name. We can also add the name of the model we are exporting to the file name for aesthetics:

use Illuminate\Contracts\Filesystem\Filesystem;
use Illuminate\Support\Facades\Storage;
use Illuminate\Support\Str;
 
$fileName = (string) str($this->model)
    ->classBasename()
    ->plural()
    ->snake()
    ->append('-')
    ->append(Str::random())
    ->append('.csv');
 
Storage::put("exports/{$fileName}", $csv->toString(), options: Filesystem::VISIBILITY_PRIVATE);

Finally, let's generate a temporary signed URL for the file, and notify the user. I am using Filament database notifications to send the notification, but you can use Laravel's built-in notifications if you prefer:

use Filament\Notifications\Actions\Action;
use Filament\Notifications\Notification;
 
$csvUrl = Storage::temporaryUrl("exports/{$fileName}", now()->addDay());
 
Notification::make()
    ->title('Export completed')
    ->actions([
        Action::make('download')->markAsRead()->url($csvUrl),
    ])
    ->success()
    ->sendToDatabase($this->user);

Serializing the query

There is a bug in our queued implementation. You may have noticed that when we started queueing the export, we switched from fetching the user's role with $user->pivot->role to fetching it with $user->pivot?->role. This is because the pivot property is only added to model instances by Laravel when they are retrieved through a relationship, and in the queued job we retrieve them by ID directly on the model class, not through the BelongsToMany relationship, so the pivot property is never set.

The flaw lies in the array of record IDs we are passing to the job. By passing only IDs, we lose the original query used to fetch the records from the database, and that query was the easiest way to fetch the pivot data for the records. We need to find a way to pass the original query to the job, so that it can be executed again inside the job.

Laravel uses query builder objects to represent database queries, and these objects are not serializable out of the box. However, there is a package called Eloquent Serialize which can serialize and unserialize Eloquent query builder objects, allowing us to pass them to queued jobs.

We can install the Eloquent Serialize package:

composer require anourvalar/eloquent-serialize

Let's pass the query into the job instead of the model, since the query builder object contains the name of the model it is querying anyway:

use App\Jobs\ExportJob;
use App\Models\User;
 
dispatch(new ExportJob(
    query: $this->getQuery(),
    header: ['name', 'email', 'role'],
    records: $this->getQuery()->pluck('users.id')->all(),
    mapper: fn (User $user): array => [
        $user->name,
        $user->email,
        $user->pivot?->role,
    ],
    user: auth()->user(),
));

And in the constructor of the job, serialize the query before storing the serialized query string in a property:

use AnourValar\EloquentSerialize\Facades\EloquentSerializeFacade;
use App\Models\User;
use Closure;
use Illuminate\Database\Eloquent\Builder;
use Laravel\SerializableClosure\SerializableClosure;
 
protected string $query;
 
protected SerializableClosure $mapper;
 
public function __construct(
    Builder $query,
    protected array $header,
    protected array $records,
    Closure $mapper,
    protected User $user,
) {
    $this->query = EloquentSerializeFacade::serialize($query);
    $this->mapper = new SerializableClosure($mapper);
}

However, now when dispatching the job, we get an error, because in my case the getQuery() method returns a BelongsToMany relationship, which is not a query builder that the Eloquent Serialize package can serialize. We need to convert the relationship to a query builder before serializing it:

query: $this->getQuery()->getQuery(),

There is a difference in the way pivot data is returned when querying a query builder instead of a relationship. Since the query builder has no knowledge of the relationship, it does not hydrate a pivot model for each model instance. Instead, the pivot data is joined onto each model instance, so it is accessible like any other property on the model rather than being nested inside a pivot relation. We need to update the mapper function to reflect this:

mapper: fn (User $user): array => [
    $user->name,
    $user->email,
    $user->role,
],

Finally, we need to adjust the handle() method of the job to unserialize the query and use it to fetch the records from the database instead of the model class name. We also need to refactor the file name generation to fetch the model from the query builder. It now looks like this:

use AnourValar\EloquentSerialize\Facades\EloquentSerializeFacade;
use Filament\Notifications\Actions\Action;
use Filament\Notifications\Notification;
use Illuminate\Contracts\Filesystem\Filesystem;
use Illuminate\Support\Facades\Storage;
use Illuminate\Support\Str;
use League\Csv\Writer;
 
$query = EloquentSerializeFacade::unserialize($this->query);
 
$csv = Writer::createFromString();
$csv->insertOne($this->header);
$csv->insertAll($query->find($this->records)->map($this->mapper->getClosure())->all());
 
$fileName = (string) str($query->getModel()::class)
    ->classBasename()
    ->plural()
    ->snake()
    ->append('-')
    ->append(Str::random())
    ->append('.csv');
 
Storage::put("exports/{$fileName}", $csv->toString(), options: Filesystem::VISIBILITY_PRIVATE);
 
$csvUrl = Storage::temporaryUrl("exports/{$fileName}", now()->addDay());
 
Notification::make()
    ->title('Export completed')
    ->actions([
        Action::make('download')->markAsRead()->url($csvUrl),
    ])
    ->success()
    ->sendToDatabase($this->user);

Reducing the queueing delay

Currently, we are executing a query to fetch all the IDs of the records we want to export, and then passing those IDs to the job. The user has to wait for this query to finish before the job is dispatched and they receive the first notification. We originally introduced the ID list before we started passing the query to the job: if the query was filtered by the user (for example, by searching for users by name or email address), it was the only way to avoid exporting users outside the filtered dataset. Now that we are passing the query to the job, we can dispatch the job immediately, and then execute the query to fetch the user model instances inside the job:

use App\Jobs\ExportJob;
use App\Models\User;
 
dispatch(new ExportJob(
    query: $this->getQuery()->getQuery(),
    header: ['name', 'email', 'role'],
    mapper: fn (User $user): array => [
        $user->name,
        $user->email,
        $user->role,
    ],
    user: auth()->user(),
));

We can remove the records argument from the constructor of the job:

use AnourValar\EloquentSerialize\Facades\EloquentSerializeFacade;
use App\Models\User;
use Closure;
use Illuminate\Database\Eloquent\Builder;
use Laravel\SerializableClosure\SerializableClosure;
 
public function __construct(
    Builder $query,
    protected array $header,
    Closure $mapper,
    protected User $user,
) {
    $this->query = EloquentSerializeFacade::serialize($query);
    $this->mapper = new SerializableClosure($mapper);
}

And finally, replace the find() method with get() in the handle() method of the job, to fetch all records from the query:

$csv->insertAll($query->get()->map($this->mapper->getClosure())->all());

Chunking the query

Currently, the get() call fetches all the records from the database in one go and hydrates each row into a model instance. With large datasets, this query could be slow due to the amount of data being fetched, and memory-inefficient due to the number of model instances that need to be held in the collection at once. We can improve this slightly by using the chunkById() method to fetch the records in smaller chunks, and then map the model instances into arrays from there. This reduces the number of model instances held in memory at any one time:

use Illuminate\Database\Eloquent\Collection;
 
$recordRows = [];
$mapper = $this->mapper->getClosure();
 
$query->chunkById(
    100,
    function (Collection $records) use ($mapper, &$recordRows) {
        $recordRows = [
            ...$recordRows,
            ...$records->map($mapper)->all(),
        ];
    },
    column: $query->getModel()->getQualifiedKeyName(),
    alias: $query->getModel()->getKeyName(),
);
 
$csv->insertAll($recordRows);

Queued jobs have a maximum execution time, and if a job takes longer than this time to execute, it will crash. Also, if there is a problem with part of the dataset and the job fails, the entire dataset needs to be exported again. Using batched jobs can solve both of these problems. We can dispatch a job to chunk the query and dispatch multiple jobs to export the chunks, and then dispatch a job to send the notification when all the chunks have been exported. We can asynchronously add chunks to the batch so that they can be processed immediately, even if the entire batch has not been prepared yet, which will provide a further performance improvement.

The way to set this up is to create a job chain containing the batch followed by the job to send the notification. The batch will be created with just one job, which will chunk the query and add the chunks to the batch.

Let's create a new job to prepare the export batch, a job to export each chunk, and a job to send the notification when the export is complete:

php artisan make:job PrepareExportBatchJob
php artisan make:job ExportChunkJob
php artisan make:job SendExportNotificationJob

In the original ExportJob, we should set up the CSV file with the correct headers, since this should only happen once per export, and the rest of the jobs require the file to exist:

use AnourValar\EloquentSerialize\Facades\EloquentSerializeFacade;
use Illuminate\Contracts\Filesystem\Filesystem;
use Illuminate\Support\Facades\Storage;
use Illuminate\Support\Str;
use League\Csv\Writer;
 
$query = EloquentSerializeFacade::unserialize($this->query);
 
$csv = Writer::createFromString();
$csv->insertOne($this->header);
 
$fileName = (string) str($query->getModel()::class)
    ->classBasename()
    ->plural()
    ->snake()
    ->append('-')
    ->append(Str::random())
    ->append('.csv');
 
Storage::put("exports/{$fileName}", $csv->toString(), options: Filesystem::VISIBILITY_PRIVATE);

Now, we should dispatch the job chain, containing the export batch as well as the notification job:

use App\Jobs\PrepareExportBatchJob;
use App\Jobs\SendExportNotificationJob;
use Illuminate\Support\Facades\Bus;
 
Bus::chain([
    Bus::batch([
        new PrepareExportBatchJob(
            $this->query,
            $this->mapper,
            $fileName,
        ),
    ]),
    new SendExportNotificationJob(
        $fileName,
        $this->user,
    ),
])->dispatch();

The PrepareExportBatchJob job will accept the serialized query, serialized mapper, and file name for the export:

use Laravel\SerializableClosure\SerializableClosure;
 
public function __construct(
    protected string $query,
    protected SerializableClosure $mapper,
    protected string $fileName,
) {}

In the handle() method of the job, we will unserialize the query and mapper, and then chunk the query and dispatch ExportChunkJob instances to export the chunks. The query is converted to a non-Eloquent query builder, since we do not need to hydrate any model instances if we just need to fetch the IDs of the records in the chunk. This makes the process even faster and less memory intensive. We also need to use the Arr::pluck() method to convert the array of database row objects into an array of IDs:

use AnourValar\EloquentSerialize\Facades\EloquentSerializeFacade;
use App\Jobs\ExportChunkJob;
use Illuminate\Support\Arr;
use Illuminate\Support\Collection;
 
$query = EloquentSerializeFacade::unserialize($this->query);
 
$keyName = $query->getModel()->getKeyName();
$qualifiedKeyName = $query->getModel()->qualifyColumn($keyName);
 
$baseQuery = $query->toBase();
$baseQuery->distinct($qualifiedKeyName);
 
$baseQuery
    ->select([$qualifiedKeyName])
    ->chunkById(
        100,
        function (Collection $records) use ($keyName) {
            $this->batch()->add(new ExportChunkJob(
                query: $this->query,
                records: Arr::pluck($records->all(), $keyName),
                mapper: $this->mapper,
                fileName: $this->fileName,
            ));
        },
        column: $qualifiedKeyName,
        alias: $keyName,
    );

The reason we fetch the IDs of the records in each chunk and pass them to the job, instead of passing the job a "page number" that it could use to paginate the query with offset and limit, is that offset pagination can produce duplicate or missing records. The database can be updated between the time the query is chunked and the time a chunk is processed, so records that were in the first chunk may shift into the second chunk, or appear in both. The chunkById() method avoids this by bounding each chunk with the primary keys of the records at its edges.
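
To illustrate the difference, here is a rough sketch (not code from the export system; $page and $lastSeenId are hypothetical) comparing offset pagination with the keyset-style queries that chunkById() issues:

// Offset pagination: if rows are inserted or deleted while the export runs,
// the contents of each "page" shift, so records can be duplicated or skipped.
$chunk = $query
    ->orderBy('users.id')
    ->offset(($page - 1) * 100)
    ->limit(100)
    ->get();

// Keyset pagination (roughly what chunkById() does): each chunk starts
// strictly after the last primary key seen in the previous chunk, so the
// chunk boundaries are unaffected by changes elsewhere in the table.
$chunk = $query
    ->where('users.id', '>', $lastSeenId)
    ->orderBy('users.id')
    ->limit(100)
    ->get();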

The ExportChunkJob accepts the serialized query, the record IDs to export, the serialized mapper, and the file name for the export:

use Laravel\SerializableClosure\SerializableClosure;
 
public function __construct(
    protected string $query,
    protected array $records,
    protected SerializableClosure $mapper,
    protected string $fileName,
) {}

In the handle() method of the job, we will unserialize the query and mapper, and then fetch the records from the database and append them to the existing CSV file. Unlike XLSX files, CSV files have no special encoding or internal structure, so we can append new rows to the existing file without corrupting it:

use AnourValar\EloquentSerialize\Facades\EloquentSerializeFacade;
use Illuminate\Support\Facades\Storage;
use League\Csv\Writer;
 
$query = EloquentSerializeFacade::unserialize($this->query);
 
$mapper = $this->mapper->getClosure();
 
$csv = Writer::createFromString();
$csv->insertAll($query->find($this->records)->map($mapper)->all());
 
Storage::append("exports/{$this->fileName}", $csv->toString());

Finally, the SendExportNotificationJob job will accept the file name for the export and the user to send the notification to:

use App\Models\User;
 
public function __construct(
    protected string $fileName,
    protected User $user,
) {}

In the handle() method of the job, we will generate a temporary signed URL for the file, and then send a notification to the user with a link to download the file:

use Filament\Notifications\Actions\Action;
use Filament\Notifications\Notification;
use Illuminate\Support\Facades\Storage;
 
$csvUrl = Storage::temporaryUrl("exports/{$this->fileName}", now()->addDay());
 
Notification::make()
    ->title('Export completed')
    ->actions([
        Action::make('download')->markAsRead()->url($csvUrl),
    ])
    ->success()
    ->sendToDatabase($this->user);

Chunking to multiple files

The current system, even though it has evolved so much, still has flaws:

  • The order of jobs being executed in the batch is not guaranteed, so the chunks of the CSV file may not end up being in the correct order.

  • S3 does not offer a way to append rows to an existing file, so Laravel will internally download the entire file, append the new rows, and then upload the entire file again. This is inefficient, can be slow with large files, and can result in memory limit issues.

The way to solve these problems is to avoid creating one singular CSV file, and instead create multiple CSV files, one for each chunk of the query. We can then use a controller to stream the chunks in the correct order into one combined download, without ever loading them all into memory at once.

The file structure in S3 will look something like this:

exports/
    users-h3b23pwa/
        00000001.csv
        00000002.csv
        00000003.csv
        00000004.csv
        headers.csv
        ...
    ...

The numbered file names are padded with zeros so that they are in the correct order when sorted alphabetically. The headers.csv file contains the header row of the CSV file, and the numbered files contain the rows of the CSV file.

Let's start by creating the header file in ExportJob. We should remove the .csv extension from the $fileName variables and properties everywhere, so we can append the chunk file name onto the end more easily:

use Illuminate\Contracts\Filesystem\Filesystem;
use Illuminate\Support\Facades\Storage;
use Illuminate\Support\Str;
 
$fileName = (string) str($query->getModel()::class)
    ->classBasename()
    ->plural()
    ->snake()
    ->append('-')
    ->append(Str::random());
 
Storage::put("exports/{$fileName}/headers.csv", $csv->toString(), options: Filesystem::VISIBILITY_PRIVATE);

In the PrepareExportBatchJob, we should keep track of the $page number of the current chunk, and pass that as a parameter to the constructor of ExportChunkJob:

use App\Jobs\ExportChunkJob;
use Illuminate\Support\Arr;
use Illuminate\Support\Collection;
 
$page = 1;
 
$baseQuery
    ->select([$qualifiedKeyName])
    ->chunkById(
        100,
        function (Collection $records) use ($keyName, &$page) {
            $this->batch()->add(new ExportChunkJob(
                query: $this->query,
                records: Arr::pluck($records->all(), $keyName),
                page: $page,
                mapper: $this->mapper,
                fileName: $this->fileName,
            ));
 
            $page++;
        },
        column: $qualifiedKeyName,
        alias: $keyName,
    );

The $page variable is passed into the function by reference (&$page) so that it can be incremented each time a chunk is processed, otherwise the value will not be persisted between calls to the function.

The new constructor for ExportChunkJob will accept the page number:

use Laravel\SerializableClosure\SerializableClosure;
 
public function __construct(
    protected string $query,
    protected array $records,
    protected int $page,
    protected SerializableClosure $mapper,
    protected string $fileName,
) {}
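
The handle() method of ExportChunkJob can then write its rows to its own zero-padded chunk file instead of appending to a shared file. A minimal sketch of how that might look, following the same unserialization steps as before:

use AnourValar\EloquentSerialize\Facades\EloquentSerializeFacade;
use Illuminate\Contracts\Filesystem\Filesystem;
use Illuminate\Support\Facades\Storage;
use League\Csv\Writer;
 
$query = EloquentSerializeFacade::unserialize($this->query);
 
$mapper = $this->mapper->getClosure();
 
$csv = Writer::createFromString();
$csv->insertAll($query->find($this->records)->map($mapper)->all());
 
// Pad the page number so the chunk files sort correctly, e.g. 00000001.csv.
$chunkFileName = sprintf('%08d.csv', $this->page);
 
Storage::put(
    "exports/{$this->fileName}/{$chunkFileName}",
    $csv->toString(),
    options: Filesystem::VISIBILITY_PRIVATE,
);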

We now need a controller to handle the fact that the export has been split into multiple chunk files. Let's create an invokable controller called DownloadExportController:

php artisan make:controller DownloadExportController --invokable

In the DownloadExportController, we should double-check that the directory containing the exported files still exists, in case you wish to implement some automatic cleanup of old exports. Then, we can fetch each file in the correct order, echo it to the response, and then flush the system output buffer:

use Illuminate\Support\Facades\Storage;
use Symfony\Component\HttpFoundation\StreamedResponse;
 
public function __invoke(string $fileName): StreamedResponse
{
    if (! Storage::exists("exports/{$fileName}")) {
        abort(419);
    }
 
    return response()->streamDownload(function () use ($fileName) {
        echo Storage::get("exports/{$fileName}/headers.csv");
 
        flush();
 
        foreach (Storage::files("exports/{$fileName}") as $file) {
            if (str($file)->endsWith('headers.csv')) {
                continue;
            }
 
            if (! str($file)->endsWith('.csv')) {
                continue;
            }
 
            echo Storage::get($file);
 
            flush();
        }
    }, "{$fileName}.csv", [
        'Content-Type' => 'text/csv',
    ]);
}

Let's register a route to the controller, and ensure that the route is protected by a signature, so that the user must have a valid signed URL to access the exported file:

use App\Http\Controllers\DownloadExportController;
use Illuminate\Support\Facades\Route;
 
Route::get('/exports/{fileName}', DownloadExportController::class)
    ->middleware(['signed'])
    ->name('exports.show');

Finally, update the URL generation in the SendExportNotificationJob to use the new route:

use Illuminate\Support\Facades\URL;
 
$csvUrl = URL::signedRoute('exports.show', ['fileName' => $this->fileName]);

Bonus: Exporting to XLSX

The system we have built so far is designed to export to CSV files, but we can also export to XLSX files. To do so, you can chain an extra job onto the chain, after the batch, which collates the chunks into one singular XLSX file. There is no way to stream multiple XLSX files into one file in the same way as we can with CSV files, so you need to be aware of the memory limits of your server when using this method, or let the user download multiple XLSX chunks as separate files. I would advise using the OpenSpout package for generating XLSX files. Alternatives like PhpSpreadsheet, while more feature-complete, are reportedly not as fast.
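
As a rough illustration only (the job itself, and the decision to build the workbook in a local temporary file before uploading it, are assumptions rather than part of the system above), the body of such a collation job using OpenSpout might look something like this:

use Illuminate\Support\Facades\Storage;
use League\Csv\Reader;
use OpenSpout\Common\Entity\Row;
use OpenSpout\Writer\XLSX\Writer;
 
// Build the XLSX in a local temporary file; XLSX chunks cannot simply be
// concatenated like CSV chunks, so the whole workbook is written in one pass.
$localPath = storage_path("app/{$this->fileName}.xlsx");
 
$writer = new Writer();
$writer->openToFile($localPath);
 
// Write the header row first, then every row from each CSV chunk in order.
$files = collect(Storage::files("exports/{$this->fileName}"))
    ->filter(fn (string $file): bool => str($file)->endsWith('.csv'))
    ->reject(fn (string $file): bool => str($file)->endsWith('headers.csv'))
    ->sort()
    ->prepend("exports/{$this->fileName}/headers.csv");
 
foreach ($files as $file) {
    foreach (Reader::createFromString(Storage::get($file)) as $record) {
        $writer->addRow(Row::fromValues($record));
    }
}
 
$writer->close();
 
// Upload the finished workbook back to S3 alongside the CSV chunks,
// then clean up the local temporary file.
Storage::writeStream("exports/{$this->fileName}.xlsx", fopen($localPath, 'r'));
 
unlink($localPath);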

Conclusion

If you'd like to see the entire code for the final system, you can find it here. The commits in the repository mostly follow the order of the steps in this article, so you can see the changes that were made to the code to implement each feature.

It's straightforward to build a simple CSV export system. But like many things, the more data you need to handle, the more complex the system needs to become to handle this efficiently. I hope that I have demonstrated a good approach to solving that problem.

If you're looking for a pre-built export system that follows these concepts, and ships with a nice fluent API for defining exports, I maintain a project called Filament which has a built-in export system that follows these concepts. It's a Laravel package that many developers use to build administration panels for their applications, but the components can be used in your own Blade-based applications too. You can find the documentation for the Export action, as well as how to implement this in Livewire.

Dan Harrin
Software Developer