NVQC Optimizations for VQE (C++ and Python) #1901

Merged: 70 commits, Jul 15, 2024

Commits (all authored by bmhowe23):
f70e94c  launchVQE stubs (Jun 5, 2024)
1c57ec9  Continue plumbing - first exec done (but as observe, not VQE) (Jun 5, 2024)
93f7737  Add optimizer serialization (untested) (Jun 6, 2024)
b820966  Add a few from_json methods (Jun 6, 2024)
c426348  Tweaks; parses on server now (Jun 6, 2024)
32a2d8d  Runs VQE on remote server (but no answers sent back yet) (Jun 6, 2024)
a51208c  Add cudaq::optimization_result to ExecutionContext to save results (Jun 6, 2024)
65324b4  Finish up answer plumbing (Jun 6, 2024)
9245c8e  Fix a FIXME (Jun 6, 2024)
b2808a1  Merge branch 'main' into pr-cpp-vqe4nvqc (Jun 6, 2024)
66ac7ac  Remove comments to make spellchecker happy (Jun 6, 2024)
7ba14d4  Change isRemote() to false for remote simulator to make tests happy [… (Jun 6, 2024)
9b39c6d  Comment out the new isRemote() function (Jun 6, 2024)
78ca1a5  Create supports_remote_vqe/supportsRemoteVQE instead of using is_remote (Jun 7, 2024)
343df8a  Add startingArgIdx to synthesizer and update VQE to use it with tuples (Jun 7, 2024)
b514c34  Use serializeArgs instead of old kludgy way (Jun 7, 2024)
eb68406  Update to variadic template arguments (Jun 7, 2024)
0e67fe2  Add initial gradient ser/deser (Jun 8, 2024)
717383e  Update gradient code to allow alternative to argMapper. (Jun 8, 2024)
8de6217  Fix simulator handling for VQE on server (Jun 9, 2024)
92773c4  Implement the rest of gradients (seems to be working now) (Jun 9, 2024)
2eb3a90  Misc cleanup / comments (Jun 10, 2024)
6c5bacd  Continued cleanup; reference --> pointer (Jun 10, 2024)
fc7f0c2  Add ability to clone gradients. Update vqe.h accordingly. (Jun 10, 2024)
b713834  Add const qualifiers (Jun 10, 2024)
5203e2c  Clean up JSON structure with optional fields (Jun 10, 2024)
99250c2  Cleanup and revert unnecessary test change (Jun 10, 2024)
d5872cf  Stash test change (Jun 10, 2024)
c9af65e  Merge branch 'main' into pr-cpp-vqe4nvqc (checked build, not test) (Jun 26, 2024)
cb29f7e  Merge branch 'main' into pr-cpp-vqe4nvqc (Jul 1, 2024)
ec4b50d  Fix merge (Jul 1, 2024)
61bacc4  clang-format (Jul 1, 2024)
15fa38a  Fix library mode remote-sim tests (Jul 1, 2024)
a6b7b51  Fix test_remote_platform.py failures for state overlap (Jul 1, 2024)
00d68b7  Allow Python cudaq.vqe() to invoke new remote VQE code (Jul 2, 2024)
71578ae  Merge branch 'main' into pr-cpp-vqe4nvqc (Jul 2, 2024)
de89f1b  Fix nvcc compilation issue and get_state_tester.cu test (Jul 2, 2024)
645ed22  Handle the case when there is no argMapper (Jul 2, 2024)
fc4c9a0  Fix some C++ tests by undoing gradient changes that aren't needed (Jul 2, 2024)
e881340  Revert temporary vqe_h2.cpp changes (Jul 3, 2024)
e4ceaac  Revert "Revert temporary vqe_h2.cpp changes" (Jul 3, 2024)
c6ecddf  Revert "Fix some C++ tests by undoing gradient changes that aren't ne… (Jul 3, 2024)
bd3de82  Let's try this again - alternate fix for C++ tests (Jul 3, 2024)
3879346  Fix docs issue (Jul 3, 2024)
3e16756  Merge branch 'main' into pr-cpp-vqe4nvqc (Jul 3, 2024)
45437db  Refine C++ changes to limit changes from original baseline (Jul 3, 2024)
e15974f  Guard NVCF feature since it is not released yet (Jul 3, 2024)
0aedecf  Merge branch 'main' into pr-cpp-vqe4nvqc (Jul 3, 2024)
1f77862  Merge branch 'main' into pr-cpp-vqe4nvqc (Jul 5, 2024)
c4e1d79  Add some C++ tests (Jul 5, 2024)
fccb5a6  Fix bug in some VQE code paths for C++ (Jul 5, 2024)
757f2f2  Format (Jul 5, 2024)
540d28c  Update maxcut example (Jul 5, 2024)
307cd21  Add comments for new parameters (Jul 5, 2024)
ae3122b  VQE sendRequest updates for NVQC (Jul 5, 2024)
129b56a  Merge branch 'main' into pr-cpp-vqe4nvqc (Jul 5, 2024)
03a6878  Fix C++17 test failures (Jul 7, 2024)
8e5ffc1  Merge branch 'main' into pr-cpp-vqe4nvqc (Jul 7, 2024)
e18aea0  Check if kernel args are compatible with optimal execution (Jul 8, 2024)
4d2b03f  Fixup - Fix C++17 test failures (Jul 8, 2024)
3b33e8a  Another C++17 workaround (Jul 8, 2024)
a7cbaf2  Merge branch 'main' into pr-cpp-vqe4nvqc (Jul 8, 2024)
0ce2587  Address PR comments (Jul 9, 2024)
beb48eb  Merge branch 'main' into pr-cpp-vqe4nvqc (Jul 10, 2024)
b9ac638  Merge branch 'main' into pr-cpp-vqe4nvqc (Jul 12, 2024)
7a3c6df  Move remote_vqe helper function into internal namespace (Jul 12, 2024)
bc2c39f  Move duplicated warning message into helper function (Jul 12, 2024)
253f431  Merge branch 'main' into pr-cpp-vqe4nvqc (Jul 12, 2024)
336c908  Merge branch 'main' into pr-cpp-vqe4nvqc (Jul 15, 2024)
76e29a7  Merge branch 'main' into pr-cpp-vqe4nvqc (Jul 15, 2024)
Changes from 1 commit: Address PR comments
bmhowe23 committed Jul 9, 2024
commit 0ce2587951a991e02c46f6468096c61050fa4a62
8 changes: 3 additions & 5 deletions lib/Optimizer/Transforms/QuakeSynthesizer.cpp
@@ -455,11 +455,9 @@ class QuakeSynthesizer
     // Keep track of the stdVec sizes.
     std::vector<std::tuple<std::size_t, Type, std::uint64_t>> stdVecInfo;
 
-    for (auto iter : llvm::enumerate(arguments)) {
-      if (iter.index() < startingArgIdx)
-        continue;
-      auto argNum = iter.index();
-      auto argument = iter.value();
+    for (std::size_t argNum = startingArgIdx, end = arguments.size();
+         argNum < end; argNum++) {
+      auto argument = arguments[argNum];
       std::size_t offset = structLayout.second[argNum - startingArgIdx];
 
       // Get the argument type
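The refactor above swaps an llvm::enumerate loop that skipped the first startingArgIdx entries for a plain index loop that simply starts at startingArgIdx, so the synthesizer leaves the leading kernel arguments alone (for the remote VQE path these are the variational parameters, which, per the comment in py_vqe.cpp below, must be the kernel's first argument). A minimal standalone sketch of the same iteration pattern follows; the Argument type and synthesizeArg step are placeholders, not the actual MLIR handling:

#include <cstddef>
#include <iostream>
#include <string>
#include <vector>

// Placeholder stand-ins for the real MLIR block argument and synthesis step.
using Argument = std::string;
static void synthesizeArg(std::size_t argNum, const Argument &arg) {
  std::cout << "synthesizing arg " << argNum << ": " << arg << "\n";
}

// Process every argument except the first `startingArgIdx` ones, mirroring
// the index-based loop introduced in QuakeSynthesizer.cpp above.
static void processArguments(const std::vector<Argument> &arguments,
                             std::size_t startingArgIdx) {
  for (std::size_t argNum = startingArgIdx, end = arguments.size();
       argNum < end; argNum++)
    synthesizeArg(argNum, arguments[argNum]);
}

int main() {
  // With startingArgIdx = 1, the first argument (e.g. the VQE parameter
  // vector) is skipped and only the remaining arguments are synthesized.
  processArguments({"theta (runtime)", "num_layers", "edges"}, 1);
  return 0;
}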
26 changes: 13 additions & 13 deletions python/runtime/cudaq/algorithms/py_vqe.cpp
@@ -63,14 +63,21 @@ bool isArgumentStdVec(MlirModule &module, const std::string &kernelName,
   return isa<cudaq::cc::StdvecType>(kernel.getArgument(argIdx).getType());
 }
 
-/// @brief Run `cudaq::observe` on the provided kernel and spin operator.
-observe_result pyObserve(py::object &kernel, spin_op &spin_operator,
-                         py::args args, const int shots,
-                         bool argMapperProvided = false) {
+/// @brief Return the kernel name and MLIR module for a kernel.
+static inline std::pair<std::string, MlirModule>
+getKernelNameAndModule(py::object &kernel) {
   if (py::hasattr(kernel, "compile"))
     kernel.attr("compile")();
   auto kernelName = kernel.attr("name").cast<std::string>();
   auto kernelMod = kernel.attr("module").cast<MlirModule>();
+  return std::make_pair(kernelName, kernelMod);
+}
+
+/// @brief Run `cudaq::observe` on the provided kernel and spin operator.
+observe_result pyObserve(py::object &kernel, spin_op &spin_operator,
+                         py::args args, const int shots,
+                         bool argMapperProvided = false) {
+  auto [kernelName, kernelMod] = getKernelNameAndModule(kernel);
   auto &platform = cudaq::get_platform();
   args = simplifiedValidateInputArguments(args);
   auto *argData = toOpaqueArgs(args, kernelMod, kernelName);
@@ -108,10 +115,7 @@ observe_result pyObserve(py::object &kernel, spin_op &spin_operator,
 /// implementation that requires the variation parameters to be the first
 /// argument in the kernel.
 static bool firstArgIsCompatibleWithRemoteVQE(py::object &kernel) {
-  if (py::hasattr(kernel, "compile"))
-    kernel.attr("compile")();
-  auto kernelName = kernel.attr("name").cast<std::string>();
-  auto kernelMod = kernel.attr("module").cast<MlirModule>();
+  auto [kernelName, kernelMod] = getKernelNameAndModule(kernel);
   auto kernelFunc = getKernelFuncOp(kernelMod, kernelName);
   if (kernelFunc.getNumArguments() < 1)
     return false;
@@ -132,11 +136,7 @@ pyVQE_remote_cpp(cudaq::quantum_platform &platform, py::object &kernel,
                  spin_op &hamiltonian, cudaq::optimizer &optimizer,
                  cudaq::gradient *gradient, py::function *argumentMapper,
                  const int n_params, const int shots) {
-
-  if (py::hasattr(kernel, "compile"))
-    kernel.attr("compile")();
-  auto kernelName = kernel.attr("name").cast<std::string>();
-  auto kernelMod = kernel.attr("module").cast<MlirModule>();
+  auto [kernelName, kernelMod] = getKernelNameAndModule(kernel);
   auto ctx = std::make_unique<ExecutionContext>("observe", /*shots=*/0);
   ctx->kernelName = kernelName;
   ctx->spin = &hamiltonian;
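The py_vqe.cpp change is pure deduplication: the compile-then-lookup sequence that previously appeared in pyObserve, firstArgIsCompatibleWithRemoteVQE, and pyVQE_remote_cpp now lives in getKernelNameAndModule and is consumed through a structured binding at each call site. A tiny generic sketch of that refactoring pattern follows; the Kernel type and getNameAndModule helper are hypothetical, not the pybind11 code:

#include <iostream>
#include <string>
#include <utility>

// Hypothetical stand-in for the Python kernel object handled in py_vqe.cpp.
struct Kernel {
  std::string name;
  int moduleId;
};

// One shared helper returning everything each caller needs as a pair.
static inline std::pair<std::string, int> getNameAndModule(const Kernel &k) {
  return std::make_pair(k.name, k.moduleId);
}

int main() {
  Kernel k{"ansatz", 7};
  // Each former duplicate collapses to a single structured binding.
  auto [name, moduleId] = getNameAndModule(k);
  std::cout << name << " / module " << moduleId << "\n";
  return 0;
}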
26 changes: 5 additions & 21 deletions python/runtime/utils/PyRemoteSimulatorQPU.cpp
@@ -27,13 +27,7 @@ class PyRemoteSimulatorQPU : public cudaq::BaseRemoteSimulatorQPU {
                  cudaq::optimizer &optimizer, const int n_params,
                  const std::size_t shots) override {
     cudaq::ExecutionContext *executionContextPtr =
-        [&]() -> cudaq::ExecutionContext * {
-      std::scoped_lock<std::mutex> lock(m_contextMutex);
-      const auto iter = m_contexts.find(std::this_thread::get_id());
-      if (iter == m_contexts.end())
-        return nullptr;
-      return iter->second;
-    }();
+        getExecutionContextForMyThread();
 
     auto *wrapper = reinterpret_cast<const cudaq::ArgWrapper *>(kernelArgs);
     auto m_module = wrapper->mod;
@@ -70,13 +64,8 @@
     auto *mlirContext = m_module->getContext();
 
     cudaq::ExecutionContext *executionContextPtr =
-        [&]() -> cudaq::ExecutionContext * {
-      std::scoped_lock<std::mutex> lock(m_contextMutex);
-      const auto iter = m_contexts.find(std::this_thread::get_id());
-      if (iter == m_contexts.end())
-        return nullptr;
-      return iter->second;
-    }();
+        getExecutionContextForMyThread();
+
     // Default context for a 'fire-and-ignore' kernel launch; i.e., no context
     // was set before launching the kernel. Use a static variable per thread to
     // set up a single-shot execution context for this case.
@@ -120,13 +109,8 @@ class PyNvcfSimulatorQPU : public cudaq::BaseNvcfSimulatorQPU {
     auto *mlirContext = m_module->getContext();
 
     cudaq::ExecutionContext *executionContextPtr =
-        [&]() -> cudaq::ExecutionContext * {
-      std::scoped_lock<std::mutex> lock(m_contextMutex);
-      const auto iter = m_contexts.find(std::this_thread::get_id());
-      if (iter == m_contexts.end())
-        return nullptr;
-      return iter->second;
-    }();
+        getExecutionContextForMyThread();
+
     // Default context for a 'fire-and-ignore' kernel launch; i.e., no context
     // was set before launching the kernel. Use a static variable per thread to
     // set up a single-shot execution context for this case.
34 changes: 13 additions & 21 deletions runtime/common/BaseRemoteSimulatorQPU.h
@@ -53,6 +53,16 @@ class BaseRemoteSimulatorQPU : public cudaq::QPU {
   std::unique_ptr<mlir::MLIRContext> m_mlirContext;
   std::unique_ptr<cudaq::RemoteRuntimeClient> m_client;
 
+  /// @brief Return a pointer to the execution context for this thread. It will
+  /// return `nullptr` if it was not found in `m_contexts`.
+  cudaq::ExecutionContext *getExecutionContextForMyThread() {
+    std::scoped_lock<std::mutex> lock(m_contextMutex);
+    const auto iter = m_contexts.find(std::this_thread::get_id());
+    if (iter == m_contexts.end())
+      return nullptr;
+    return iter->second;
+  }
+
 public:
   BaseRemoteSimulatorQPU()
       : QPU(),
@@ -100,13 +110,7 @@ class BaseRemoteSimulatorQPU : public cudaq::QPU {
                  cudaq::optimizer &optimizer, const int n_params,
                  const std::size_t shots) override {
     cudaq::ExecutionContext *executionContextPtr =
-        [&]() -> cudaq::ExecutionContext * {
-      std::scoped_lock<std::mutex> lock(m_contextMutex);
-      const auto iter = m_contexts.find(std::this_thread::get_id());
-      if (iter == m_contexts.end())
-        return nullptr;
-      return iter->second;
-    }();
+        getExecutionContextForMyThread();
 
     if (executionContextPtr && executionContextPtr->name == "tracer")
       return;
@@ -135,13 +139,7 @@
                 name, qpu_id, m_simName);
 
     cudaq::ExecutionContext *executionContextPtr =
-        [&]() -> cudaq::ExecutionContext * {
-      std::scoped_lock<std::mutex> lock(m_contextMutex);
-      const auto iter = m_contexts.find(std::this_thread::get_id());
-      if (iter == m_contexts.end())
-        return nullptr;
-      return iter->second;
-    }();
+        getExecutionContextForMyThread();
 
     if (executionContextPtr && executionContextPtr->name == "tracer") {
       return;
@@ -173,13 +171,7 @@
                 name, qpu_id, m_simName);
 
     cudaq::ExecutionContext *executionContextPtr =
-        [&]() -> cudaq::ExecutionContext * {
-      std::scoped_lock<std::mutex> lock(m_contextMutex);
-      const auto iter = m_contexts.find(std::this_thread::get_id());
-      if (iter == m_contexts.end())
-        return nullptr;
-      return iter->second;
-    }();
+        getExecutionContextForMyThread();
 
     if (executionContextPtr && executionContextPtr->name == "tracer") {
       return;
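The new getExecutionContextForMyThread() member centralizes the lock-and-lookup that every launch path previously open-coded as an immediately invoked lambda. The underlying pattern, a mutex-guarded map keyed by std::this_thread::get_id(), is sketched standalone below; the member names follow the diff above, but SimplifiedRemoteQPU is an illustration, not the real BaseRemoteSimulatorQPU:

#include <iostream>
#include <mutex>
#include <string>
#include <thread>
#include <unordered_map>

struct ExecutionContext {
  std::string name;
  explicit ExecutionContext(std::string n) : name(std::move(n)) {}
};

class SimplifiedRemoteQPU {
  std::mutex m_contextMutex;
  std::unordered_map<std::thread::id, ExecutionContext *> m_contexts;

public:
  void setExecutionContext(ExecutionContext *ctx) {
    std::scoped_lock<std::mutex> lock(m_contextMutex);
    m_contexts[std::this_thread::get_id()] = ctx;
  }

  // Mirrors getExecutionContextForMyThread(): returns nullptr when the calling
  // thread never registered a context.
  ExecutionContext *getExecutionContextForMyThread() {
    std::scoped_lock<std::mutex> lock(m_contextMutex);
    const auto iter = m_contexts.find(std::this_thread::get_id());
    if (iter == m_contexts.end())
      return nullptr;
    return iter->second;
  }
};

int main() {
  SimplifiedRemoteQPU qpu;
  ExecutionContext ctx("observe");
  qpu.setExecutionContext(&ctx);
  std::cout << "this thread: "
            << (qpu.getExecutionContextForMyThread() ? "found" : "nullptr")
            << "\n";

  // A different thread never registered a context, so the lookup yields null.
  std::thread other([&qpu] {
    std::cout << "other thread: "
              << (qpu.getExecutionContextForMyThread() ? "found" : "nullptr")
              << "\n";
  });
  other.join();
  return 0;
}

Keeping the lookup in one helper also means a future change to the synchronization strategy touches a single function rather than the six copies of the lambda removed in this commit.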
70 changes: 31 additions & 39 deletions runtime/cudaq/algorithms/vqe.h
@@ -15,6 +15,28 @@
 
 namespace cudaq {
 
+/// \brief This is an internal helper function to reduce duplicated code in the
+/// user-facing `vqe()` functions below. Users should not directly call this
+/// function.
+template <typename QuantumKernel, typename... Args,
+          typename = std::enable_if_t<
+              std::is_invocable_v<QuantumKernel, std::vector<double>, Args...>>>
+static inline optimization_result
+remote_vqe(cudaq::quantum_platform &platform, QuantumKernel &&kernel,
+           cudaq::spin_op &H, cudaq::optimizer &optimizer,
+           cudaq::gradient *gradient, const int n_params,
+           const std::size_t shots, Args &&...args) {
+  auto ctx = std::make_unique<ExecutionContext>("observe", shots);
+  ctx->kernelName = cudaq::getKernelName(kernel);
+  ctx->spin = &H;
+  platform.set_exec_ctx(ctx.get());
+  auto serializedArgsBuffer = serializeArgs(args...);
+  platform.launchVQE(ctx->kernelName, serializedArgsBuffer.data(), gradient, H,
+                     optimizer, n_params, shots);
+  platform.reset_exec_ctx();
+  return ctx->optResult.value_or(optimization_result{});
+}
+
 ///
 /// \brief Compute the minimal eigenvalue of \p H with VQE.
 ///
@@ -73,19 +95,9 @@ optimization_result vqe(QuantumKernel &&kernel, cudaq::spin_op H,
   }
 
   auto &platform = cudaq::get_platform();
-  if (platform.supports_remote_vqe()) {
-    auto ctx = std::make_unique<ExecutionContext>("observe", /*shots=*/0);
-    ctx->kernelName = cudaq::getKernelName(kernel);
-    ctx->spin = &H;
-    platform.set_exec_ctx(ctx.get());
-    auto serializedArgsBuffer = serializeArgs(args...);
-    platform.launchVQE(cudaq::getKernelName(kernel),
-                       /*kernelArgs=*/serializedArgsBuffer.data(),
-                       /*gradient=*/nullptr, H, optimizer, n_params,
-                       /*shots=*/0);
-    platform.reset_exec_ctx();
-    return ctx->optResult.value_or(optimization_result{});
-  }
+  if (platform.supports_remote_vqe())
+    return remote_vqe(platform, kernel, H, optimizer, /*gradient=*/nullptr,
+                      n_params, /*shots=*/0, args...);
 
   return optimizer.optimize(n_params, [&](const std::vector<double> &x,
                                           std::vector<double> &grad_vec) {
@@ -153,19 +165,9 @@ optimization_result vqe(std::size_t shots, QuantumKernel &&kernel,
   }
 
   auto &platform = cudaq::get_platform();
-  if (platform.supports_remote_vqe()) {
-    auto ctx = std::make_unique<ExecutionContext>("observe", /*shots=*/shots);
-    ctx->kernelName = cudaq::getKernelName(kernel);
-    ctx->spin = &H;
-    platform.set_exec_ctx(ctx.get());
-    auto serializedArgsBuffer = serializeArgs(args...);
-    platform.launchVQE(cudaq::getKernelName(kernel),
-                       serializedArgsBuffer.data(), /*gradient=*/nullptr, H,
-                       optimizer, n_params,
-                       /*shots=*/shots);
-    platform.reset_exec_ctx();
-    return ctx->optResult.value_or(optimization_result{});
-  }
+  if (platform.supports_remote_vqe())
+    return remote_vqe(platform, kernel, H, optimizer, /*gradient=*/nullptr,
+                      n_params, shots, args...);
 
   return optimizer.optimize(n_params, [&](const std::vector<double> &x,
                                           std::vector<double> &grad_vec) {
@@ -234,19 +236,9 @@ optimization_result vqe(QuantumKernel &&kernel, cudaq::gradient &gradient,
       "std::tuple<Args...>(std::vector<double>) ArgMapper function object.");
 
   auto &platform = cudaq::get_platform();
-  if (platform.supports_remote_vqe()) {
-    auto ctx = std::make_unique<ExecutionContext>("observe", /*shots=*/0);
-    ctx->kernelName = cudaq::getKernelName(kernel);
-    ctx->spin = &H;
-    platform.set_exec_ctx(ctx.get());
-    auto serializedArgsBuffer = serializeArgs(args...);
-    platform.launchVQE(cudaq::getKernelName(kernel),
-                       /*kernelArgs=*/serializedArgsBuffer.data(), &gradient, H,
-                       optimizer, n_params,
-                       /*shots=*/0);
-    platform.reset_exec_ctx();
-    return ctx->optResult.value_or(optimization_result{});
-  }
+  if (platform.supports_remote_vqe())
+    return remote_vqe(platform, kernel, H, optimizer, &gradient, n_params,
+                      /*shots=*/0, args...);
 
   auto requires_grad = optimizer.requiresGradients();
   // If there are additional arguments, we need to clone the gradient and
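Taken together, the vqe.h changes route all three public cudaq::vqe overloads through remote_vqe() whenever platform.supports_remote_vqe() is true, so the optimizer (and optional gradient) is serialized and shipped to the remote service through a single launchVQE call instead of the client driving one observe round trip per iteration. A hedged end-to-end usage sketch of the C++ entry point follows; the ansatz, the Hamiltonian coefficients, and the nvqc target name are illustrative assumptions modeled on the standard CUDA-Q VQE examples, not part of this diff:

// Build roughly as: nvq++ vqe_remote_example.cpp --target nvqc  (target name
// assumed; consult the CUDA-Q docs for the released NVQC target and flags).
#include <cudaq.h>
#include <cudaq/algorithm.h>
#include <cudaq/optimizers.h>
#include <cstdio>
#include <vector>

// Two-qubit ansatz. For the remote VQE fast path the std::vector<double> of
// variational parameters must be the kernel's first argument.
struct ansatz {
  void operator()(std::vector<double> theta) __qpu__ {
    cudaq::qvector q(2);
    x(q[0]);
    ry(theta[0], q[1]);
    x<cudaq::ctrl>(q[1], q[0]);
  }
};

int main() {
  // Illustrative deuteron-style Hamiltonian; coefficients are example values.
  cudaq::spin_op H = 5.907 - 2.1433 * cudaq::spin::x(0) * cudaq::spin::x(1) -
                     2.1433 * cudaq::spin::y(0) * cudaq::spin::y(1) +
                     0.21829 * cudaq::spin::z(0) - 6.125 * cudaq::spin::z(1);

  cudaq::optimizers::cobyla optimizer;
  // With this PR, a platform reporting supports_remote_vqe() makes this call
  // dispatch to remote_vqe(); otherwise it falls back to the local
  // optimizer.optimize() loop shown in the diff above.
  auto [energy, params] = cudaq::vqe(ansatz{}, H, optimizer, /*n_params=*/1);
  printf("min energy = %lf\n", energy);
  return 0;
}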