The Quantum Approximate Optimization Algorithm (QAOA) is a hybrid quantum-classical algorithm whose ansatz alternates two parameterized unitary transformations, one derived from the problem Hamiltonian and one from the mixer Hamiltonian. These unitaries are executed on a quantum machine or simulator, while their parameters are iteratively tuned by a classical optimizer so as to optimize the problem's expectation value. To estimate that expectation value, most of the literature measures the QAOA outcomes on a quantum machine or simulator at every iteration. This poses a severe bottleneck: quantum machines are heavily constrained (e.g., long queuing times and limited qubit counts), while classical quantum simulation incurs memory usage that grows exponentially with the number of qubits, and hence with problem size. These limitations make today's QAOA implementations impractical, since good solutions are hard to obtain within a reasonable time and resource budget. Considering these facts, this work presents a new approach with two main contributions: (I) removing the need to access quantum devices or large classical machines during the QAOA optimization phase, and (II) ensuring that, for certain k-bounded pseudo-Boolean problems, the exact expectation value of the problem can be optimized in polynomial time on a classical computer.
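
To make the measurement-based loop concrete, the following is a minimal sketch (not the authors' method) of the standard QAOA workflow described above: a depth-1 ansatz for a toy MaxCut instance, simulated as a NumPy statevector, with the expectation value tuned by a classical optimizer. The graph, depth, and all function names are illustrative assumptions.

    # Minimal sketch of the standard QAOA loop (illustrative, not the
    # authors' approach): p=1 ansatz for a toy 3-node MaxCut instance,
    # simulated as a full statevector, optimized classically.
    import numpy as np
    from scipy.optimize import minimize

    edges = [(0, 1), (1, 2), (2, 0)]  # toy MaxCut graph (assumption)
    n = 3

    def cut_values():
        # MaxCut objective C(z) for every computational basis state z.
        z = np.arange(2 ** n)
        bits = (z[:, None] >> np.arange(n)) & 1
        return sum(bits[:, i] ^ bits[:, j] for i, j in edges)

    C = cut_values()

    def qaoa_expectation(params):
        gamma, beta = params
        # Start from |+>^n; the problem unitary e^{-i*gamma*H_P} is
        # diagonal in the computational (Z) basis.
        state = np.full(2 ** n, 2 ** (-n / 2), dtype=complex)
        state *= np.exp(-1j * gamma * C)
        # Mixer unitary e^{-i*beta*X} applied to each qubit in turn.
        c, s = np.cos(beta), -1j * np.sin(beta)
        for q in range(n):
            state = state.reshape(2 ** (n - q - 1), 2, 2 ** q)
            state = np.stack([c * state[:, 0] + s * state[:, 1],
                              s * state[:, 0] + c * state[:, 1]], axis=1)
        state = state.reshape(-1)
        # <psi|H_P|psi>, negated so minimization maximizes the cut.
        return -np.real(np.sum(np.abs(state) ** 2 * C))

    result = minimize(qaoa_expectation, x0=[0.5, 0.5], method="COBYLA")
    print("best cut expectation:", -result.fun, "(gamma, beta) =", result.x)

Note that qaoa_expectation stores all 2^n amplitudes, which is exactly the exponential memory cost that motivates contributions (I) and (II) above.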