Not really.
An ASIC is an Application-Specific Integrated Circuit - a chip made to do just one thing, but do it efficiently.
GPUs, sometimes known as “Graphics Cards,” were first made to render images quickly, which sounds kinda ‘Application-Specific,’ but they do it by allowing for massive parallelization. Much of image processing (ray tracing, frame rendering, texture mapping) is very well suited to parallel processing. In computer science, this is called “embarrassingly parallel,” but these are still just general tasks done in parallel.
Lots of tasks are embarrassingly parallel: Embarrassingly parallel - Wikipedia
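To make “embarrassingly parallel” concrete, here’s a loose CPU-side sketch in Python (not actual GPU code, just an analogy): each pixel of an image can be brightened without knowing anything about any other pixel, so the work splits into independent chunks with zero coordination. `multiprocessing.Pool` stands in for the GPU’s army of parallel cores.

```python
from multiprocessing import Pool

def brighten(pixel):
    # Each pixel is processed completely independently -- no pixel
    # needs the result of any other. That independence is exactly
    # what "embarrassingly parallel" means.
    return min(pixel + 40, 255)

if __name__ == "__main__":
    pixels = [0, 100, 200, 250]      # a tiny stand-in for an image
    with Pool(4) as pool:            # 4 workers, like 4 parallel cores
        result = pool.map(brighten, pixels)
    print(result)                    # [40, 140, 240, 255]
```

Because no worker ever waits on another, adding more workers (or thousands of GPU cores) speeds this up almost linearly.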
GPUs are used for flow modeling, 3-D modeling, neural networks, radio signal analysis (SETI), protein modeling (Folding), cancer and AIDS research (GPUGRID), vector math, engineering, scientific computing, self-driving cars, deep learning… and, oh yeah, cryptocurrency mining.
That’s not Application-Specific, it’s General!
GPUs are so much more flexible because, unlike an ASIC that does just one thing, a GPU presents an API to a programming language (like OpenCL or OpenMP) which lets the software do many, many different things on the chip. Any non-trivial task that can be done in parallel (as opposed to serially) can probably be assigned to the GPU and done faster than on a CPU.
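That generality is the key point: the same parallel interface can run completely unrelated workloads. As a rough analogy (again in plain Python rather than OpenCL, with `multiprocessing` standing in for the GPU runtime), the very same worker pool that brightened pixels could instead compute partial sums of a dot product, a building block of the vector math and neural-network work listed above:

```python
from multiprocessing import Pool

def dot_chunk(pair):
    # One independent slice of a dot product: each worker multiplies
    # and sums its own slice locally, with no communication.
    xs, ys = pair
    return sum(x * y for x, y in zip(xs, ys))

if __name__ == "__main__":
    a = list(range(8))               # [0, 1, ..., 7]
    b = [2] * 8
    # Split the vectors into independent chunks, one per worker.
    chunks = [(a[:4], b[:4]), (a[4:], b[4:])]
    with Pool(2) as pool:
        partials = pool.map(dot_chunk, chunks)  # parallel partial sums
    print(sum(partials))             # 56  (= 2 * (0+1+...+7))
```

Nothing about the hardware (or here, the pool) changed between the two tasks; only the function handed to it did. That’s the sense in which a GPU is general where an ASIC is not.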
I think of a GPU as a Parallel Processing Unit which was originally designed to process pixels in images, but is now usable for all sorts of general purposes.
Even a more AI-specific cousin of the GPU, like Google’s Tensor Processing Unit, is programmable for a variety of tasks!